Mar 14 00:35:55.740237 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:35:55.740265 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:35:55.740281 kernel: BIOS-provided physical RAM map:
Mar 14 00:35:55.740290 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 14 00:35:55.740299 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 14 00:35:55.740307 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:35:55.740317 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 14 00:35:55.740327 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 14 00:35:55.740336 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:35:55.740348 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:35:55.740357 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:35:55.740365 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:35:55.740374 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:35:55.740383 kernel: NX (Execute Disable) protection: active
Mar 14 00:35:55.740394 kernel: APIC: Static calls initialized
Mar 14 00:35:55.740407 kernel: SMBIOS 2.8 present.
Mar 14 00:35:55.740417 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 14 00:35:55.740426 kernel: Hypervisor detected: KVM
Mar 14 00:35:55.740436 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:35:55.740445 kernel: kvm-clock: using sched offset of 6707458864 cycles
Mar 14 00:35:55.740455 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:35:55.740465 kernel: tsc: Detected 2445.426 MHz processor
Mar 14 00:35:55.740475 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:35:55.740485 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:35:55.740498 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 14 00:35:55.740508 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:35:55.740518 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:35:55.740527 kernel: Using GB pages for direct mapping
Mar 14 00:35:55.740537 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:35:55.740547 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 14 00:35:55.740599 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740611 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740621 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740635 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 14 00:35:55.740645 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740655 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740665 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740674 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:35:55.740684 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 14 00:35:55.740694 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 14 00:35:55.755710 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 14 00:35:55.755726 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 14 00:35:55.755737 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 14 00:35:55.755748 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 14 00:35:55.755758 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 14 00:35:55.755803 kernel: No NUMA configuration found
Mar 14 00:35:55.755814 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 14 00:35:55.755829 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 14 00:35:55.755839 kernel: Zone ranges:
Mar 14 00:35:55.755850 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:35:55.755860 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 14 00:35:55.755870 kernel: Normal empty
Mar 14 00:35:55.755881 kernel: Movable zone start for each node
Mar 14 00:35:55.755891 kernel: Early memory node ranges
Mar 14 00:35:55.755901 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:35:55.755911 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 14 00:35:55.755921 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 14 00:35:55.755935 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:35:55.755945 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:35:55.755956 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 14 00:35:55.755966 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:35:55.755976 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:35:55.755987 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:35:55.755997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:35:55.756008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:35:55.756018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:35:55.756032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:35:55.756042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:35:55.756052 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:35:55.756063 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:35:55.756073 kernel: TSC deadline timer available
Mar 14 00:35:55.756083 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 14 00:35:55.756093 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:35:55.756104 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 14 00:35:55.756114 kernel: kvm-guest: setup PV sched yield
Mar 14 00:35:55.756127 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:35:55.756138 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:35:55.756148 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:35:55.756159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 14 00:35:55.756169 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 14 00:35:55.756180 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 14 00:35:55.756190 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 14 00:35:55.756200 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:35:55.756210 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:35:55.756225 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:35:55.756236 kernel: random: crng init done
Mar 14 00:35:55.756246 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:35:55.756256 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:35:55.756266 kernel: Fallback order for Node 0: 0
Mar 14 00:35:55.756276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 14 00:35:55.756287 kernel: Policy zone: DMA32
Mar 14 00:35:55.756297 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:35:55.756311 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 14 00:35:55.756321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 14 00:35:55.756332 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:35:55.756342 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:35:55.756352 kernel: Dynamic Preempt: voluntary
Mar 14 00:35:55.756362 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:35:55.756374 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:35:55.756384 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 14 00:35:55.756395 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:35:55.756408 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:35:55.756418 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:35:55.756429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:35:55.756439 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 14 00:35:55.756449 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 14 00:35:55.756460 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:35:55.756470 kernel: Console: colour VGA+ 80x25
Mar 14 00:35:55.756480 kernel: printk: console [ttyS0] enabled
Mar 14 00:35:55.756490 kernel: ACPI: Core revision 20230628
Mar 14 00:35:55.756501 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:35:55.756514 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:35:55.756524 kernel: x2apic enabled
Mar 14 00:35:55.756535 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:35:55.756545 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 14 00:35:55.756604 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 14 00:35:55.756618 kernel: kvm-guest: setup PV IPIs
Mar 14 00:35:55.756629 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:35:55.756653 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:35:55.756664 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 14 00:35:55.756675 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:35:55.756686 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:35:55.756700 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:35:55.756711 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:35:55.756721 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:35:55.756733 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:35:55.756743 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:35:55.756757 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 14 00:35:55.756799 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 14 00:35:55.756810 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:35:55.756821 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 14 00:35:55.756832 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:35:55.756843 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:35:55.756854 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:35:55.756865 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:35:55.756880 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:35:55.756891 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:35:55.756902 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 14 00:35:55.756913 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:35:55.756924 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:35:55.756935 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:35:55.756946 kernel: landlock: Up and running.
Mar 14 00:35:55.756956 kernel: SELinux: Initializing.
Mar 14 00:35:55.756968 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:35:55.756982 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:35:55.756993 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 14 00:35:55.757003 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:35:55.757014 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:35:55.757026 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:35:55.757037 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 14 00:35:55.757047 kernel: signal: max sigframe size: 1776
Mar 14 00:35:55.757058 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:35:55.757069 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:35:55.757083 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:35:55.757094 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:35:55.757105 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:35:55.757115 kernel: .... node #0, CPUs: #1 #2 #3
Mar 14 00:35:55.757126 kernel: smp: Brought up 1 node, 4 CPUs
Mar 14 00:35:55.757137 kernel: smpboot: Max logical packages: 1
Mar 14 00:35:55.757148 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 14 00:35:55.757158 kernel: devtmpfs: initialized
Mar 14 00:35:55.757169 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:35:55.757183 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:35:55.757194 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 14 00:35:55.757205 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:35:55.757216 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:35:55.757227 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:35:55.757238 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:35:55.757249 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:35:55.757260 kernel: audit: type=2000 audit(1773448552.111:1): state=initialized audit_enabled=0 res=1
Mar 14 00:35:55.757271 kernel: cpuidle: using governor menu
Mar 14 00:35:55.757285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:35:55.757296 kernel: dca service started, version 1.12.1
Mar 14 00:35:55.757307 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:35:55.757318 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:35:55.757329 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:35:55.757340 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:35:55.757350 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:35:55.757361 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:35:55.757372 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:35:55.757386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:35:55.757397 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:35:55.757408 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:35:55.757418 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:35:55.757429 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:35:55.757440 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:35:55.757451 kernel: ACPI: Interpreter enabled
Mar 14 00:35:55.757461 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:35:55.757472 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:35:55.757486 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:35:55.757497 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:35:55.757507 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:35:55.757518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:35:55.757853 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:35:55.758047 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:35:55.758218 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:35:55.758235 kernel: PCI host bridge to bus 0000:00
Mar 14 00:35:55.758406 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:35:55.758601 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:35:55.758760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:35:55.767752 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 14 00:35:55.768821 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:35:55.773527 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 14 00:35:55.773799 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:35:55.773997 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:35:55.774172 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 14 00:35:55.774332 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 14 00:35:55.774489 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 14 00:35:55.774698 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 14 00:35:55.774899 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:35:55.775079 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 14 00:35:55.775241 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 14 00:35:55.778506 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 14 00:35:55.778740 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 14 00:35:55.779286 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 14 00:35:55.779457 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 14 00:35:55.779668 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 14 00:35:55.779881 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 14 00:35:55.780054 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:35:55.780216 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 14 00:35:55.780378 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 14 00:35:55.780538 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 14 00:35:55.780748 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 14 00:35:55.784036 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:35:55.784220 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:35:55.784398 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:35:55.784645 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 14 00:35:55.784856 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 14 00:35:55.785060 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:35:55.785222 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:35:55.785243 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:35:55.785254 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:35:55.785265 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:35:55.785277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:35:55.785288 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:35:55.785298 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:35:55.785309 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:35:55.785320 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:35:55.785331 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:35:55.785345 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:35:55.785356 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:35:55.785367 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:35:55.785378 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:35:55.785389 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:35:55.785400 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:35:55.785410 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:35:55.785421 kernel: iommu: Default domain type: Translated
Mar 14 00:35:55.785432 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:35:55.785446 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:35:55.785458 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:35:55.785469 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 14 00:35:55.785480 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 14 00:35:55.785688 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:35:55.787612 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:35:55.788039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:35:55.788058 kernel: vgaarb: loaded
Mar 14 00:35:55.788076 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:35:55.788088 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:35:55.788099 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:35:55.788111 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:35:55.788123 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:35:55.788134 kernel: pnp: PnP ACPI init
Mar 14 00:35:55.788553 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:35:55.788744 kernel: pnp: PnP ACPI: found 6 devices
Mar 14 00:35:55.788757 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:35:55.788806 kernel: NET: Registered PF_INET protocol family
Mar 14 00:35:55.788817 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:35:55.788828 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:35:55.788839 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:35:55.788851 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:35:55.788862 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:35:55.788873 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:35:55.788884 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:35:55.788899 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:35:55.788910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:35:55.788921 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:35:55.789080 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:35:55.789227 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:35:55.789370 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:35:55.789514 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 14 00:35:55.789750 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:35:55.789935 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 14 00:35:55.789956 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:35:55.789967 kernel: Initialise system trusted keyrings
Mar 14 00:35:55.789978 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:35:55.789989 kernel: Key type asymmetric registered
Mar 14 00:35:55.790000 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:35:55.790011 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:35:55.790022 kernel: io scheduler mq-deadline registered
Mar 14 00:35:55.790033 kernel: io scheduler kyber registered
Mar 14 00:35:55.790044 kernel: io scheduler bfq registered
Mar 14 00:35:55.790058 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:35:55.790070 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:35:55.790081 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 14 00:35:55.790093 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 14 00:35:55.790104 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:35:55.790115 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:35:55.790126 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:35:55.790137 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:35:55.790148 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:35:55.790314 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 14 00:35:55.790330 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 14 00:35:55.790479 kernel: rtc_cmos 00:04: registered as rtc0
Mar 14 00:35:55.790722 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:35:54 UTC (1773448554)
Mar 14 00:35:55.790917 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 14 00:35:55.790933 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 14 00:35:55.790944 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:35:55.790955 kernel: Segment Routing with IPv6
Mar 14 00:35:55.790971 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:35:55.790982 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:35:55.790993 kernel: Key type dns_resolver registered
Mar 14 00:35:55.791005 kernel: IPI shorthand broadcast: enabled
Mar 14 00:35:55.791016 kernel: sched_clock: Marking stable (1696019980, 404963313)->(2690983657, -590000364)
Mar 14 00:35:55.791026 kernel: registered taskstats version 1
Mar 14 00:35:55.791037 kernel: Loading compiled-in X.509 certificates
Mar 14 00:35:55.791048 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:35:55.791059 kernel: Key type .fscrypt registered
Mar 14 00:35:55.791073 kernel: Key type fscrypt-provisioning registered
Mar 14 00:35:55.791084 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:35:55.791095 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:35:55.791106 kernel: ima: No architecture policies found
Mar 14 00:35:55.791117 kernel: clk: Disabling unused clocks
Mar 14 00:35:55.791128 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:35:55.791139 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:35:55.791150 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:35:55.791164 kernel: Run /init as init process
Mar 14 00:35:55.791175 kernel: with arguments:
Mar 14 00:35:55.791186 kernel: /init
Mar 14 00:35:55.791197 kernel: with environment:
Mar 14 00:35:55.791207 kernel: HOME=/
Mar 14 00:35:55.791218 kernel: TERM=linux
Mar 14 00:35:55.791230 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:35:55.791244 systemd[1]: Detected virtualization kvm.
Mar 14 00:35:55.791258 systemd[1]: Detected architecture x86-64.
Mar 14 00:35:55.791270 systemd[1]: Running in initrd.
Mar 14 00:35:55.791281 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:35:55.791292 systemd[1]: Hostname set to .
Mar 14 00:35:55.791303 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:35:55.791315 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:35:55.791327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:35:55.791338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:35:55.791355 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:35:55.791367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:35:55.791379 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:35:55.791391 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:35:55.791405 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:35:55.791417 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:35:55.791429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:35:55.791445 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:35:55.791457 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:35:55.791469 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:35:55.791481 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:35:55.791508 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:35:55.791524 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:35:55.791539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:35:55.791552 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:35:55.791657 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:35:55.791671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:35:55.791683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:35:55.791695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:35:55.791707 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:35:55.791719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:35:55.791732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:35:55.791749 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:35:55.791761 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:35:55.791805 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:35:55.791818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:35:55.791830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:35:55.791843 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:35:55.791883 systemd-journald[195]: Collecting audit messages is disabled.
Mar 14 00:35:55.791915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:35:55.791927 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:35:55.791940 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:35:55.791956 systemd-journald[195]: Journal started
Mar 14 00:35:55.791980 systemd-journald[195]: Runtime Journal (/run/log/journal/441cd8d701504435a1af37bb950625d6) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:35:55.727438 systemd-modules-load[196]: Inserted module 'overlay'
Mar 14 00:35:55.962402 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:35:55.962442 kernel: Bridge firewalling registered
Mar 14 00:35:55.962459 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:35:55.811118 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 14 00:35:55.969156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:35:55.973230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:35:55.992841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:35:56.002533 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:35:56.006071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:35:56.010280 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:35:56.016350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:35:56.036388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:35:56.047339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:35:56.054980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:35:56.084068 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:35:56.091994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:35:56.109757 dracut-cmdline[229]: dracut-dracut-053
Mar 14 00:35:56.105650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:35:56.120677 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:35:56.171027 systemd-resolved[240]: Positive Trust Anchors:
Mar 14 00:35:56.171065 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:35:56.171115 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:35:56.174119 systemd-resolved[240]: Defaulting to hostname 'linux'.
Mar 14 00:35:56.175519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:35:56.180129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:35:56.319666 kernel: SCSI subsystem initialized
Mar 14 00:35:56.332694 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:35:56.357242 kernel: iscsi: registered transport (tcp)
Mar 14 00:35:56.391805 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:35:56.391885 kernel: QLogic iSCSI HBA Driver
Mar 14 00:35:56.489090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:35:56.515933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:35:56.553638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:35:56.553720 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:35:56.556628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:35:56.617834 kernel: raid6: avx2x4 gen() 25826 MB/s
Mar 14 00:35:56.634653 kernel: raid6: avx2x2 gen() 25713 MB/s
Mar 14 00:35:56.654379 kernel: raid6: avx2x1 gen() 12166 MB/s
Mar 14 00:35:56.654449 kernel: raid6: using algorithm avx2x4 gen() 25826 MB/s
Mar 14 00:35:56.674048 kernel: raid6: .... xor() 4785 MB/s, rmw enabled
Mar 14 00:35:56.674133 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:35:56.697655 kernel: xor: automatically using best checksumming function avx
Mar 14 00:35:56.889642 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:35:56.910150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:35:56.928212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:35:56.943252 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 14 00:35:56.951181 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:35:56.970951 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:35:56.989823 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Mar 14 00:35:57.034922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:35:57.054985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:35:57.171365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:35:57.186118 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:35:57.205738 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:35:57.214248 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:35:57.221049 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:35:57.228517 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:35:57.240764 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:35:57.252352 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 14 00:35:57.252539 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:35:57.251827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:35:57.252738 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:35:57.269607 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 14 00:35:57.269874 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:35:57.298443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:35:57.298476 kernel: GPT:9289727 != 19775487
Mar 14 00:35:57.298493 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:35:57.298509 kernel: GPT:9289727 != 19775487
Mar 14 00:35:57.298665 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:35:57.298687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:57.279820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:35:57.304738 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:35:57.279916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:35:57.289951 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:35:57.312072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:35:57.312556 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:35:57.331210 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (465)
Mar 14 00:35:57.338634 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:35:57.344668 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Mar 14 00:35:57.361656 kernel: libata version 3.00 loaded.
Mar 14 00:35:57.368743 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:35:57.368983 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:35:57.372481 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 14 00:35:57.528862 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:35:57.529076 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:35:57.529227 kernel: scsi host0: ahci
Mar 14 00:35:57.529392 kernel: scsi host1: ahci
Mar 14 00:35:57.529544 kernel: scsi host2: ahci
Mar 14 00:35:57.529818 kernel: scsi host3: ahci
Mar 14 00:35:57.529991 kernel: scsi host4: ahci
Mar 14 00:35:57.530135 kernel: scsi host5: ahci
Mar 14 00:35:57.530278 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 14 00:35:57.530289 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 14 00:35:57.530305 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 14 00:35:57.530314 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 14 00:35:57.530324 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 14 00:35:57.530333 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 14 00:35:57.518906 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 14 00:35:57.528841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 14 00:35:57.532323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:35:57.544167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 14 00:35:57.557211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:35:57.575891 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:35:57.577216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:35:57.594434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:57.594473 disk-uuid[558]: Primary Header is updated.
Mar 14 00:35:57.594473 disk-uuid[558]: Secondary Entries is updated.
Mar 14 00:35:57.594473 disk-uuid[558]: Secondary Header is updated.
Mar 14 00:35:57.604237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:57.614934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:35:57.691853 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 14 00:35:57.691922 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:57.692650 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:57.697687 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:57.700621 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:57.700654 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 14 00:35:57.704319 kernel: ata3.00: applying bridge limits
Mar 14 00:35:57.706649 kernel: ata3.00: configured for UDMA/100
Mar 14 00:35:57.709652 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:57.713693 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:35:57.772931 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 14 00:35:57.773354 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:35:57.793637 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:35:58.605637 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:58.607997 disk-uuid[559]: The operation has completed successfully.
Mar 14 00:35:58.654804 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:35:58.654958 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:35:58.687900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:35:58.695917 sh[594]: Success
Mar 14 00:35:58.719634 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:35:58.770458 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:35:58.789838 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:35:58.795941 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:35:58.815874 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:35:58.815927 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:58.815944 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:35:58.821732 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:35:58.821772 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:35:58.833075 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:35:58.833972 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:35:58.852839 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:35:58.856652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:35:58.888628 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:58.888720 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:58.888740 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:35:58.897610 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:35:58.911112 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:35:58.917982 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:58.930262 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:35:58.938898 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:35:59.013632 ignition[709]: Ignition 2.19.0
Mar 14 00:35:59.013646 ignition[709]: Stage: fetch-offline
Mar 14 00:35:59.013713 ignition[709]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:59.013731 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:59.013905 ignition[709]: parsed url from cmdline: ""
Mar 14 00:35:59.013911 ignition[709]: no config URL provided
Mar 14 00:35:59.013920 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:35:59.013936 ignition[709]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:35:59.031959 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:35:59.013975 ignition[709]: op(1): [started] loading QEMU firmware config module
Mar 14 00:35:59.013985 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 14 00:35:59.024632 ignition[709]: op(1): [finished] loading QEMU firmware config module
Mar 14 00:35:59.050996 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:35:59.087145 systemd-networkd[783]: lo: Link UP
Mar 14 00:35:59.087176 systemd-networkd[783]: lo: Gained carrier
Mar 14 00:35:59.089429 systemd-networkd[783]: Enumeration completed
Mar 14 00:35:59.089688 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:35:59.090670 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:35:59.090676 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:35:59.092257 systemd-networkd[783]: eth0: Link UP
Mar 14 00:35:59.092263 systemd-networkd[783]: eth0: Gained carrier
Mar 14 00:35:59.092272 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:35:59.095120 systemd[1]: Reached target network.target - Network.
Mar 14 00:35:59.137663 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:35:59.205947 systemd-resolved[240]: Detected conflict on linux IN A 10.0.0.132
Mar 14 00:35:59.205984 systemd-resolved[240]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Mar 14 00:35:59.253676 ignition[709]: parsing config with SHA512: ca3703f9426eb6b9e884e0af415f28e04cef41e210a426917b3d46233afa8bdea5adf95e289c77022ae7973ce71d96aaf43b0329ffffba01a3c4513ee2198b5f
Mar 14 00:35:59.258648 unknown[709]: fetched base config from "system"
Mar 14 00:35:59.258842 unknown[709]: fetched user config from "qemu"
Mar 14 00:35:59.260139 ignition[709]: fetch-offline: fetch-offline passed
Mar 14 00:35:59.263533 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:35:59.260214 ignition[709]: Ignition finished successfully
Mar 14 00:35:59.270441 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 14 00:35:59.280928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:35:59.303450 ignition[787]: Ignition 2.19.0
Mar 14 00:35:59.303482 ignition[787]: Stage: kargs
Mar 14 00:35:59.303853 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:59.303869 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:59.305089 ignition[787]: kargs: kargs passed
Mar 14 00:35:59.305142 ignition[787]: Ignition finished successfully
Mar 14 00:35:59.322927 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:35:59.334904 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:35:59.351091 ignition[795]: Ignition 2.19.0
Mar 14 00:35:59.351118 ignition[795]: Stage: disks
Mar 14 00:35:59.351347 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:59.354414 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:35:59.351367 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:59.357626 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:35:59.352462 ignition[795]: disks: disks passed
Mar 14 00:35:59.362862 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:35:59.352517 ignition[795]: Ignition finished successfully
Mar 14 00:35:59.366043 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:35:59.368818 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:35:59.368931 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:35:59.388953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:35:59.412419 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:35:59.411702 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:35:59.420997 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:35:59.546630 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:35:59.547301 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:35:59.548237 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:35:59.562839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:35:59.567083 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:35:59.596766 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Mar 14 00:35:59.596837 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:59.596867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:59.596882 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:35:59.596892 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:35:59.589327 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:35:59.589405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:35:59.589449 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:35:59.615298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:35:59.619994 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:35:59.636965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:35:59.697252 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:35:59.705638 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:35:59.720255 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:35:59.732448 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:35:59.919969 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:35:59.945896 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:35:59.953543 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:35:59.982408 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:35:59.989892 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:36:00.023194 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:36:00.034674 ignition[927]: INFO : Ignition 2.19.0
Mar 14 00:36:00.034674 ignition[927]: INFO : Stage: mount
Mar 14 00:36:00.040394 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:36:00.040394 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:36:00.048678 ignition[927]: INFO : mount: mount passed
Mar 14 00:36:00.051122 ignition[927]: INFO : Ignition finished successfully
Mar 14 00:36:00.055529 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:36:00.066849 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:36:00.079040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:36:00.106691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 14 00:36:00.106757 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:36:00.106774 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:36:00.111412 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:36:00.118661 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:36:00.120248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:36:00.156062 ignition[957]: INFO : Ignition 2.19.0
Mar 14 00:36:00.161074 ignition[957]: INFO : Stage: files
Mar 14 00:36:00.161074 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:36:00.161074 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:36:00.171077 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:36:00.175927 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:36:00.175927 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:36:00.188153 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:36:00.193443 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:36:00.198716 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:36:00.198716 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:36:00.198716 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:36:00.194383 unknown[957]: wrote ssh authorized keys file for user: core
Mar 14 00:36:00.244210 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:36:00.290885 systemd-networkd[783]: eth0: Gained IPv6LL
Mar 14 00:36:00.467040 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:36:00.467040 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:36:00.482472 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:36:00.813972 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:36:01.486197 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:36:01.486197 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 14 00:36:01.503524 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:36:01.619114 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:36:01.629124 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:36:01.636781 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:36:01.636781 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:36:01.636781 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:36:01.636781 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:36:01.636781 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:36:01.636781 ignition[957]: INFO : files: files passed
Mar 14 00:36:01.636781 ignition[957]: INFO : Ignition finished successfully
Mar 14 00:36:01.676130 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:36:01.692975 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:36:01.696634 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:36:01.721489 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:36:01.725257 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 14 00:36:01.730650 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:36:01.730650 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:36:01.756672 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:36:01.730659 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:36:01.736142 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:36:01.746442 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:36:01.782892 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:36:01.825775 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:36:01.826009 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:36:01.836453 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:36:01.845507 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:36:01.845911 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:36:01.868370 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:36:01.891063 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:36:01.912754 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:36:01.932618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:36:01.937972 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:36:01.945041 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:36:01.949054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:36:01.949282 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:36:01.960474 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:36:01.968207 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:36:01.975120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:36:01.982664 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:36:01.991046 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:36:01.998407 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:36:02.002383 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:36:02.011053 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:36:02.014555 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:36:02.021659 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:36:02.028668 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:36:02.028937 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:36:02.034673 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:36:02.041925 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:36:02.049704 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:36:02.049996 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:36:02.055620 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:36:02.055847 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:36:02.061415 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:36:02.061710 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:36:02.068294 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:36:02.073855 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:36:02.078006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:36:02.084980 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:36:02.091136 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:36:02.099163 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:36:02.099343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:36:02.105950 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:36:02.106106 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:36:02.174910 ignition[1011]: INFO : Ignition 2.19.0
Mar 14 00:36:02.174910 ignition[1011]: INFO : Stage: umount
Mar 14 00:36:02.174910 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:36:02.174910 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:36:02.112915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:36:02.212647 ignition[1011]: INFO : umount: umount passed
Mar 14 00:36:02.212647 ignition[1011]: INFO : Ignition finished successfully
Mar 14 00:36:02.113149 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:36:02.119277 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:36:02.119476 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:36:02.141034 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:36:02.146645 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:36:02.146896 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:36:02.156988 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:36:02.162741 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:36:02.163257 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:36:02.171319 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:36:02.171488 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:36:02.188204 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:36:02.188462 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:36:02.200077 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:36:02.201904 systemd[1]: Stopped target network.target - Network.
Mar 14 00:36:02.206347 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:36:02.206427 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:36:02.219035 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:36:02.219172 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:36:02.227202 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:36:02.227294 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:36:02.234144 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:36:02.234246 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:36:02.245548 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:36:02.253102 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:36:02.264215 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:36:02.264382 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:36:02.271676 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 14 00:36:02.272150 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:36:02.272380 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:36:02.280346 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:36:02.280496 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:36:02.291635 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:36:02.291734 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:36:02.308822 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:36:02.313758 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:36:02.313882 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:36:02.321404 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:36:02.321494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:36:02.324919 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:36:02.324992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:36:02.331836 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:36:02.331917 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:36:02.339727 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:36:02.347602 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:36:02.348457 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:36:02.367033 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:36:02.367182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:36:02.375014 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:36:02.375326 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:36:02.386767 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:36:02.386999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:36:02.394930 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:36:02.395053 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:36:02.400075 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:36:02.400140 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:36:02.538739 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:36:02.405890 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:36:02.405965 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:36:02.411731 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:36:02.411866 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:36:02.420525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:36:02.420871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:36:02.438834 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:36:02.441287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:36:02.441398 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:36:02.446365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:36:02.446436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:36:02.453073 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:36:02.453257 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:36:02.462834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:36:02.470014 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:36:02.496031 systemd[1]: Switching root.
Mar 14 00:36:02.597032 systemd-journald[195]: Journal stopped
Mar 14 00:36:04.124037 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:36:04.124115 kernel: SELinux: policy capability open_perms=1
Mar 14 00:36:04.124127 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:36:04.124138 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:36:04.124152 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:36:04.124162 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:36:04.124172 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:36:04.124182 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:36:04.124193 kernel: audit: type=1403 audit(1773448562.803:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:36:04.124220 systemd[1]: Successfully loaded SELinux policy in 64.326ms.
Mar 14 00:36:04.124251 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.056ms.
Mar 14 00:36:04.124263 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:36:04.124274 systemd[1]: Detected virtualization kvm.
Mar 14 00:36:04.124288 systemd[1]: Detected architecture x86-64.
Mar 14 00:36:04.124299 systemd[1]: Detected first boot.
Mar 14 00:36:04.124312 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:36:04.124323 zram_generator::config[1055]: No configuration found.
Mar 14 00:36:04.124335 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:36:04.124346 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:36:04.124357 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:36:04.124368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:36:04.124381 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:36:04.124392 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:36:04.124402 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:36:04.124413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:36:04.124424 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:36:04.124435 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:36:04.124445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:36:04.124456 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:36:04.124468 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:36:04.124480 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:36:04.124490 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:36:04.124506 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:36:04.124516 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:36:04.124527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:36:04.124538 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:36:04.124550 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:36:04.124620 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:36:04.124640 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:36:04.124652 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:36:04.124668 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:36:04.124679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:36:04.124719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:36:04.124740 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:36:04.124751 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:36:04.124778 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:36:04.124836 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:36:04.124857 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:36:04.124876 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:36:04.124894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:36:04.124911 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:36:04.124927 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:36:04.124946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:36:04.124963 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:36:04.124981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:04.125006 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:36:04.125027 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:36:04.125047 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:36:04.125066 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:36:04.125083 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:36:04.125097 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:36:04.125109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:36:04.125120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:36:04.125131 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:36:04.125145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:36:04.125156 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:36:04.125167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:36:04.125178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:36:04.125189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:36:04.125200 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:36:04.125211 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:36:04.125221 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:36:04.125238 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:36:04.125257 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:36:04.125275 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:36:04.125316 kernel: fuse: init (API version 7.39)
Mar 14 00:36:04.125335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:36:04.125355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:36:04.125378 kernel: ACPI: bus type drm_connector registered
Mar 14 00:36:04.125392 kernel: loop: module loaded
Mar 14 00:36:04.125403 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:36:04.125445 systemd-journald[1139]: Collecting audit messages is disabled.
Mar 14 00:36:04.125467 systemd-journald[1139]: Journal started
Mar 14 00:36:04.125486 systemd-journald[1139]: Runtime Journal (/run/log/journal/441cd8d701504435a1af37bb950625d6) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:36:03.597086 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:36:03.620110 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 00:36:03.621098 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:36:03.621682 systemd[1]: systemd-journald.service: Consumed 1.520s CPU time.
Mar 14 00:36:04.133655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:36:04.138137 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:36:04.138189 systemd[1]: Stopped verity-setup.service.
Mar 14 00:36:04.147775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:04.156992 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:36:04.158468 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:36:04.162271 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:36:04.167174 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:36:04.170938 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:36:04.174951 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:36:04.179108 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:36:04.183079 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:36:04.187409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:36:04.192933 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:36:04.193185 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:36:04.198230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:36:04.198509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:36:04.203230 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:36:04.203493 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:36:04.207477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:36:04.207791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:36:04.212123 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:36:04.212379 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:36:04.215979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:36:04.216241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:36:04.220036 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:36:04.225098 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:36:04.229599 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:36:04.247382 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:36:04.263054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:36:04.268859 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:36:04.273133 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:36:04.273226 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:36:04.279650 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:36:04.285636 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:36:04.293029 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:36:04.296389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:36:04.299440 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:36:04.304736 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:36:04.308865 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:36:04.310497 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:36:04.315452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:36:04.317917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:36:04.328528 systemd-journald[1139]: Time spent on flushing to /var/log/journal/441cd8d701504435a1af37bb950625d6 is 32.799ms for 941 entries.
Mar 14 00:36:04.328528 systemd-journald[1139]: System Journal (/var/log/journal/441cd8d701504435a1af37bb950625d6) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:36:04.387470 systemd-journald[1139]: Received client request to flush runtime journal.
Mar 14 00:36:04.324149 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:36:04.334287 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:36:04.347225 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:36:04.355441 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:36:04.370055 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:36:04.375350 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:36:04.383236 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:36:04.387828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:36:04.391689 kernel: loop0: detected capacity change from 0 to 217752
Mar 14 00:36:04.396151 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:36:04.409100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:36:04.422890 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:36:04.428627 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:36:04.435792 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:36:04.445146 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:36:04.471009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:36:04.477023 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:36:04.478678 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:36:04.491650 kernel: loop1: detected capacity change from 0 to 140768
Mar 14 00:36:04.494288 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:36:04.513098 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Mar 14 00:36:04.513122 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Mar 14 00:36:04.525434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:36:04.577643 kernel: loop2: detected capacity change from 0 to 142488
Mar 14 00:36:04.645613 kernel: loop3: detected capacity change from 0 to 217752
Mar 14 00:36:04.682647 kernel: loop4: detected capacity change from 0 to 140768
Mar 14 00:36:04.720773 kernel: loop5: detected capacity change from 0 to 142488
Mar 14 00:36:04.754541 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 14 00:36:04.755474 (sd-merge)[1193]: Merged extensions into '/usr'.
Mar 14 00:36:04.762687 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:36:04.762843 systemd[1]: Reloading...
Mar 14 00:36:04.941616 zram_generator::config[1219]: No configuration found.
Mar 14 00:36:05.195421 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:36:05.273940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:36:05.357606 systemd[1]: Reloading finished in 593 ms.
Mar 14 00:36:05.423228 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:36:05.438771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:36:05.453124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:36:05.504061 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:36:05.530114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:36:05.568175 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:36:05.601377 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:36:05.601425 systemd[1]: Reloading...
Mar 14 00:36:05.690099 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:36:05.690664 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:36:05.700720 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:36:05.701170 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 14 00:36:05.701273 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 14 00:36:05.717116 systemd-udevd[1259]: Using default interface naming scheme 'v255'.
Mar 14 00:36:05.724061 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:36:05.724079 systemd-tmpfiles[1258]: Skipping /boot
Mar 14 00:36:05.756667 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:36:05.756883 systemd-tmpfiles[1258]: Skipping /boot
Mar 14 00:36:05.834908 zram_generator::config[1292]: No configuration found.
Mar 14 00:36:06.063881 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1313)
Mar 14 00:36:06.268645 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 14 00:36:06.291627 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:36:06.324378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:36:06.551647 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 00:36:06.578061 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:36:06.608386 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:36:06.609934 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:36:06.649105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:36:06.660377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:36:06.663237 systemd[1]: Reloading finished in 1061 ms.
Mar 14 00:36:06.743913 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:36:06.762662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:36:06.771429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:36:07.006033 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:36:07.015918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:07.025423 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:36:07.040478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:36:07.054145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:36:07.059344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:36:07.130229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:36:07.157890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:36:07.184381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:36:07.202391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:36:07.208546 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:36:07.234938 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:36:07.268475 augenrules[1378]: No rules
Mar 14 00:36:07.275001 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:36:07.310080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:36:07.400649 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:36:07.417098 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:36:07.433053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:36:07.445040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:07.446692 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:36:07.456056 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:36:07.462775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:36:07.464248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:36:07.469534 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:36:07.469803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:36:07.490202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:36:07.490456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:36:07.498021 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:36:07.499373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:36:07.516904 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:36:07.523504 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:36:07.549504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:36:07.549785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:36:07.641480 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:36:07.698196 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:36:07.959175 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:36:07.960221 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:36:08.056112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:36:08.070078 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Mar 14 00:36:08.097151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:36:08.255881 systemd-networkd[1380]: lo: Link UP Mar 14 00:36:08.256332 systemd-networkd[1380]: lo: Gained carrier Mar 14 00:36:08.263306 systemd-networkd[1380]: Enumeration completed Mar 14 00:36:08.264315 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:36:08.264322 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:36:08.264736 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:36:08.315599 systemd-networkd[1380]: eth0: Link UP Mar 14 00:36:08.315731 systemd-networkd[1380]: eth0: Gained carrier Mar 14 00:36:08.318912 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:36:08.339009 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:36:08.515352 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:36:08.527907 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:36:08.575555 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:36:08.581091 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Mar 14 00:36:08.598211 systemd-resolved[1384]: Positive Trust Anchors: Mar 14 00:36:08.598711 systemd-resolved[1384]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:36:08.598809 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:36:08.603859 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 14 00:36:08.603962 systemd-timesyncd[1385]: Initial clock synchronization to Sat 2026-03-14 00:36:08.673191 UTC. Mar 14 00:36:08.615707 systemd-resolved[1384]: Defaulting to hostname 'linux'. Mar 14 00:36:08.623715 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:36:08.633149 systemd[1]: Reached target network.target - Network. Mar 14 00:36:08.636951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:36:08.896970 kernel: hrtimer: interrupt took 5519885 ns Mar 14 00:36:09.021448 kernel: kvm_amd: TSC scaling supported Mar 14 00:36:09.021658 kernel: kvm_amd: Nested Virtualization enabled Mar 14 00:36:09.021691 kernel: kvm_amd: Nested Paging enabled Mar 14 00:36:09.021715 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 14 00:36:09.023411 kernel: kvm_amd: PMU virtualization is disabled Mar 14 00:36:09.399739 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:36:09.483393 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:36:09.498016 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:36:09.523643 lvm[1418]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 14 00:36:09.573721 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:36:09.586159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:36:09.591872 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:36:09.596654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:36:09.602366 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:36:09.607190 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:36:09.611018 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:36:09.615372 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:36:09.620259 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:36:09.620808 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:36:09.624825 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:36:09.631235 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:36:09.639003 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:36:09.651207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:36:09.658089 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:36:09.664323 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:36:09.669990 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:36:09.675059 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:36:09.676315 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 14 00:36:09.679905 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:36:09.680895 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:36:09.702930 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:36:09.714321 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:36:09.730269 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:36:09.744613 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:36:09.750522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:36:09.758095 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:36:09.772742 jq[1425]: false Mar 14 00:36:09.779753 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:36:09.797680 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:36:09.806078 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:36:09.839740 systemd-networkd[1380]: eth0: Gained IPv6LL Mar 14 00:36:09.855832 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 14 00:36:09.862513 extend-filesystems[1426]: Found loop3 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found loop4 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found loop5 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found sr0 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda1 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda2 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda3 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found usr Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda4 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda6 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda7 Mar 14 00:36:09.862513 extend-filesystems[1426]: Found vda9 Mar 14 00:36:09.862513 extend-filesystems[1426]: Checking size of /dev/vda9 Mar 14 00:36:09.940177 extend-filesystems[1426]: Resized partition /dev/vda9 Mar 14 00:36:09.955037 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 14 00:36:09.867960 dbus-daemon[1424]: [system] SELinux support is enabled Mar 14 00:36:09.864378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:36:09.955559 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:36:09.865134 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:36:09.870099 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:36:09.885109 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:36:09.900229 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:36:09.928373 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Mar 14 00:36:09.940964 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:36:09.977351 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:36:09.977738 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:36:09.978305 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:36:09.978673 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:36:09.990469 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:36:09.990806 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:36:10.018475 jq[1441]: true Mar 14 00:36:10.021727 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:36:10.021885 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:36:10.035104 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 14 00:36:10.041811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:36:10.055973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:36:10.063178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:36:10.063382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:36:10.069843 update_engine[1439]: I20260314 00:36:10.069674 1439 main.cc:92] Flatcar Update Engine starting Mar 14 00:36:10.071324 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 14 00:36:10.071357 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:36:10.087205 jq[1459]: true Mar 14 00:36:10.087519 update_engine[1439]: I20260314 00:36:10.086632 1439 update_check_scheduler.cc:74] Next update check in 2m34s Mar 14 00:36:10.091212 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:36:10.092443 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:36:10.092484 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:36:10.093821 systemd-logind[1437]: New seat seat0. Mar 14 00:36:10.113952 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 14 00:36:10.100181 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:36:10.226385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1311) Mar 14 00:36:10.226490 tar[1449]: linux-amd64/LICENSE Mar 14 00:36:10.124279 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:36:10.197006 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 14 00:36:10.197239 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 14 00:36:10.205152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:36:10.230061 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 14 00:36:10.230061 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 14 00:36:10.230061 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 14 00:36:10.290272 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Mar 14 00:36:10.294851 tar[1449]: linux-amd64/helm Mar 14 00:36:10.248101 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 14 00:36:10.294977 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:36:10.248450 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:36:10.287184 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:36:10.313050 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 14 00:36:10.316399 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:36:10.316771 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:36:10.393122 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:36:10.442449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:36:10.455339 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:36:10.474795 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:36:10.476727 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:36:10.489715 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:36:10.505622 containerd[1451]: time="2026-03-14T00:36:10.503164975Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:36:10.513037 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:36:10.534087 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:36:10.545170 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:36:10.554166 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:36:10.583002 containerd[1451]: time="2026-03-14T00:36:10.582678125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:36:10.585959 containerd[1451]: time="2026-03-14T00:36:10.585907850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:36:10.586068 containerd[1451]: time="2026-03-14T00:36:10.586045535Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:36:10.586148 containerd[1451]: time="2026-03-14T00:36:10.586127233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:36:10.586487 containerd[1451]: time="2026-03-14T00:36:10.586460627Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:36:10.586652 containerd[1451]: time="2026-03-14T00:36:10.586557335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:36:10.586855 containerd[1451]: time="2026-03-14T00:36:10.586828796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:36:10.586989 containerd[1451]: time="2026-03-14T00:36:10.586969620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:36:10.587322 containerd[1451]: time="2026-03-14T00:36:10.587294655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.587461110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.587498178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.587517792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.587713057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.588098723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.588255081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.588276433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.588408877Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 14 00:36:10.588636 containerd[1451]: time="2026-03-14T00:36:10.588498116Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:36:10.603867 containerd[1451]: time="2026-03-14T00:36:10.603713196Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:36:10.604130 containerd[1451]: time="2026-03-14T00:36:10.604099769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:36:10.604427 containerd[1451]: time="2026-03-14T00:36:10.604403231Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:36:10.604518 containerd[1451]: time="2026-03-14T00:36:10.604499012Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:36:10.604744 containerd[1451]: time="2026-03-14T00:36:10.604688826Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:36:10.604942 containerd[1451]: time="2026-03-14T00:36:10.604889310Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:36:10.605422 containerd[1451]: time="2026-03-14T00:36:10.605263538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606090988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606177865Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606193017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606205706Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606217507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606228581Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606240685Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606253323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606265184Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606276420Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606287878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606305978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606317709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 14 00:36:10.606332 containerd[1451]: time="2026-03-14T00:36:10.606328410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606339635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606351224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606362763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606373119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606387787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606398710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606411238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606421958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606431781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606442835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606455745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606474825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606485162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.606899 containerd[1451]: time="2026-03-14T00:36:10.606494602Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606553576Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606609885Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606624431Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606645016Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606661156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606676490Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606691179Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:36:10.608245 containerd[1451]: time="2026-03-14T00:36:10.606708088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:36:10.608475 containerd[1451]: time="2026-03-14T00:36:10.607073341Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:36:10.608475 containerd[1451]: time="2026-03-14T00:36:10.607226863Z" level=info msg="Connect containerd service"
Mar 14 00:36:10.608475 containerd[1451]: time="2026-03-14T00:36:10.607286464Z" level=info msg="using legacy CRI server"
Mar 14 00:36:10.608475 containerd[1451]: time="2026-03-14T00:36:10.607300919Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:36:10.608475 containerd[1451]: time="2026-03-14T00:36:10.607447114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609022602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609530727Z" level=info msg="Start subscribing containerd event"
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609644133Z" level=info msg="Start recovering state"
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609734058Z" level=info msg="Start event monitor"
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609767341Z" level=info msg="Start snapshots syncer"
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609784523Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:36:10.610362 containerd[1451]: time="2026-03-14T00:36:10.609941872Z" level=info msg="Start streaming server"
Mar 14 00:36:10.610632 containerd[1451]: time="2026-03-14T00:36:10.610515674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:36:10.610817 containerd[1451]: time="2026-03-14T00:36:10.610651401Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:36:10.610817 containerd[1451]: time="2026-03-14T00:36:10.610743152Z" level=info msg="containerd successfully booted in 0.109921s"
Mar 14 00:36:10.610952 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:36:11.131450 tar[1449]: linux-amd64/README.md
Mar 14 00:36:11.160532 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:36:11.966818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:36:11.982976 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:36:11.994389 systemd[1]: Startup finished in 1.923s (kernel) + 7.955s (initrd) + 9.253s (userspace) = 19.132s.
Mar 14 00:36:12.077008 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:36:12.899065 kubelet[1537]: E0314 00:36:12.898852 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:36:12.910740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:36:12.911098 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:36:12.911671 systemd[1]: kubelet.service: Consumed 1.324s CPU time.
Mar 14 00:36:19.522996 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:36:19.559695 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:47878.service - OpenSSH per-connection server daemon (10.0.0.1:47878).
Mar 14 00:36:19.718165 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 47878 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:19.750384 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:19.787278 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:36:19.788148 systemd-logind[1437]: New session 1 of user core.
Mar 14 00:36:19.803654 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:36:19.860910 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:36:19.878902 systemd[1]: Starting user@500.service - User Manager for UID 500...
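The kubelet failure above is the standard symptom of a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, not shipped with the OS. As a rough illustration only, a minimal KubeletConfiguration of the kind kubeadm generates might look like the following (field values here are hypothetical, not taken from this host, apart from the systemd cgroup driver and the CA path that appear later in this log):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml — normally generated by
# `kubeadm init` / `kubeadm join`; shown only to indicate what the kubelet
# failed to find. Values are illustrative.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```

Until such a file exists, systemd will keep restarting the unit and logging the same `open /var/lib/kubelet/config.yaml: no such file or directory` error, as seen repeatedly below.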
Mar 14 00:36:19.883358 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:36:20.092108 systemd[1555]: Queued start job for default target default.target.
Mar 14 00:36:20.109885 systemd[1555]: Created slice app.slice - User Application Slice.
Mar 14 00:36:20.109955 systemd[1555]: Reached target paths.target - Paths.
Mar 14 00:36:20.109979 systemd[1555]: Reached target timers.target - Timers.
Mar 14 00:36:20.113231 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:36:20.154969 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:36:20.156032 systemd[1555]: Reached target sockets.target - Sockets.
Mar 14 00:36:20.156499 systemd[1555]: Reached target basic.target - Basic System.
Mar 14 00:36:20.156680 systemd[1555]: Reached target default.target - Main User Target.
Mar 14 00:36:20.156858 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:36:20.159029 systemd[1555]: Startup finished in 263ms.
Mar 14 00:36:20.167330 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:36:20.313115 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:32930.service - OpenSSH per-connection server daemon (10.0.0.1:32930).
Mar 14 00:36:20.405058 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 32930 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:20.411755 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:20.455686 systemd-logind[1437]: New session 2 of user core.
Mar 14 00:36:20.467258 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:36:20.601642 sshd[1566]: pam_unix(sshd:session): session closed for user core
Mar 14 00:36:20.627237 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:32930.service: Deactivated successfully.
Mar 14 00:36:20.651762 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:36:20.658734 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:36:20.675327 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:32932.service - OpenSSH per-connection server daemon (10.0.0.1:32932).
Mar 14 00:36:20.680902 systemd-logind[1437]: Removed session 2.
Mar 14 00:36:20.766851 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 32932 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:20.783432 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:20.808297 systemd-logind[1437]: New session 3 of user core.
Mar 14 00:36:20.817833 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:36:20.952440 sshd[1573]: pam_unix(sshd:session): session closed for user core
Mar 14 00:36:20.981461 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:32932.service: Deactivated successfully.
Mar 14 00:36:20.988347 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:36:20.996324 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:36:21.014152 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:32936.service - OpenSSH per-connection server daemon (10.0.0.1:32936).
Mar 14 00:36:21.017204 systemd-logind[1437]: Removed session 3.
Mar 14 00:36:21.125007 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 32936 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:21.130270 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:21.168287 systemd-logind[1437]: New session 4 of user core.
Mar 14 00:36:21.176306 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:36:21.285117 sshd[1580]: pam_unix(sshd:session): session closed for user core
Mar 14 00:36:21.350980 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:32942.service - OpenSSH per-connection server daemon (10.0.0.1:32942).
Mar 14 00:36:21.353693 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:32936.service: Deactivated successfully.
Mar 14 00:36:21.358940 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:36:21.364522 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:36:21.375013 systemd-logind[1437]: Removed session 4.
Mar 14 00:36:21.472331 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 32942 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:21.476891 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:21.508326 systemd-logind[1437]: New session 5 of user core.
Mar 14 00:36:21.534095 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:36:21.698903 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:36:21.701247 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:36:21.757686 sudo[1590]: pam_unix(sudo:session): session closed for user root
Mar 14 00:36:21.760942 sshd[1585]: pam_unix(sshd:session): session closed for user core
Mar 14 00:36:21.777293 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:32942.service: Deactivated successfully.
Mar 14 00:36:21.780975 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:36:21.789722 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:36:21.814277 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:32952.service - OpenSSH per-connection server daemon (10.0.0.1:32952).
Mar 14 00:36:21.820908 systemd-logind[1437]: Removed session 5.
Mar 14 00:36:21.886813 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 32952 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:21.892323 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:21.909365 systemd-logind[1437]: New session 6 of user core.
Mar 14 00:36:21.923867 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:36:22.031933 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:36:22.033525 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:36:22.077443 sudo[1599]: pam_unix(sudo:session): session closed for user root
Mar 14 00:36:22.089491 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:36:22.090027 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:36:22.124821 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:36:22.155495 auditctl[1602]: No rules
Mar 14 00:36:22.162634 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:36:22.162976 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:36:22.176631 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:36:22.290053 augenrules[1620]: No rules
Mar 14 00:36:22.294140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:36:22.297880 sudo[1598]: pam_unix(sudo:session): session closed for user root
Mar 14 00:36:22.311780 sshd[1595]: pam_unix(sshd:session): session closed for user core
Mar 14 00:36:22.324162 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:32952.service: Deactivated successfully.
Mar 14 00:36:22.333236 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:36:22.362021 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:36:22.381053 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:32968.service - OpenSSH per-connection server daemon (10.0.0.1:32968).
Mar 14 00:36:22.384074 systemd-logind[1437]: Removed session 6.
Mar 14 00:36:22.513100 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 32968 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:36:22.515282 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:36:22.540050 systemd-logind[1437]: New session 7 of user core.
Mar 14 00:36:22.562936 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:36:22.628620 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:36:22.629085 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:36:23.170706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:36:23.197971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:36:23.554392 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:36:23.573312 (dockerd)[1652]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:36:23.599173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:36:23.617248 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:36:23.710670 kubelet[1658]: E0314 00:36:23.709659 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:36:23.718099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:36:23.718283 systemd[1]: kubelet.service: Failed with result 'exit-code'.
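Both docker.service and kubelet.service warn above about referenced-but-unset environment variables (DOCKER_OPTS, KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS, and so on). These warnings are harmless: the unit files reference variables that are expected to be provided by an EnvironmentFile which does not exist yet. As a hedged sketch of how such variables are usually supplied, a systemd drop-in following the stock kubeadm layout might look like this (file paths are assumptions based on common kubeadm packaging and may differ on Flatcar):

```ini
; Hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
; The leading "-" makes a missing EnvironmentFile non-fatal, which is why the
; unit starts anyway and merely logs the "unset environment variable" warning.
[Service]
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
```

Once `kubeadm join` writes kubeadm-flags.env, KUBELET_KUBEADM_ARGS is populated and the warning disappears.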
Mar 14 00:36:25.415618 dockerd[1652]: time="2026-03-14T00:36:25.415446138Z" level=info msg="Starting up"
Mar 14 00:36:25.915910 systemd[1]: var-lib-docker-metacopy\x2dcheck953446078-merged.mount: Deactivated successfully.
Mar 14 00:36:25.968608 dockerd[1652]: time="2026-03-14T00:36:25.968277714Z" level=info msg="Loading containers: start."
Mar 14 00:36:26.225631 kernel: Initializing XFRM netlink socket
Mar 14 00:36:26.398307 systemd-networkd[1380]: docker0: Link UP
Mar 14 00:36:26.432086 dockerd[1652]: time="2026-03-14T00:36:26.431901251Z" level=info msg="Loading containers: done."
Mar 14 00:36:26.660525 dockerd[1652]: time="2026-03-14T00:36:26.660314430Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:36:26.660525 dockerd[1652]: time="2026-03-14T00:36:26.660489984Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:36:26.660897 dockerd[1652]: time="2026-03-14T00:36:26.660762287Z" level=info msg="Daemon has completed initialization"
Mar 14 00:36:26.746202 dockerd[1652]: time="2026-03-14T00:36:26.745965097Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:36:26.746202 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:36:26.883968 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3014890063-merged.mount: Deactivated successfully.
Mar 14 00:36:28.527255 containerd[1451]: time="2026-03-14T00:36:28.527086080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 14 00:36:29.637103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3830552179.mount: Deactivated successfully.
Mar 14 00:36:33.383189 containerd[1451]: time="2026-03-14T00:36:33.383032927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:33.385342 containerd[1451]: time="2026-03-14T00:36:33.385204295Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 14 00:36:33.389297 containerd[1451]: time="2026-03-14T00:36:33.389216257Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:33.393772 containerd[1451]: time="2026-03-14T00:36:33.393652248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:33.396313 containerd[1451]: time="2026-03-14T00:36:33.396169724Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 4.869015104s"
Mar 14 00:36:33.396313 containerd[1451]: time="2026-03-14T00:36:33.396287295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 14 00:36:33.399136 containerd[1451]: time="2026-03-14T00:36:33.399067924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 14 00:36:33.970423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
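containerd reports the kube-apiserver pull above as a compressed size of 27693066 bytes transferred in 4.869015104s. As a quick sanity check on the effective registry throughput, the figures from that log line work out as follows (a throwaway calculation, not part of any tooling):

```python
# Effective pull throughput for registry.k8s.io/kube-apiserver:v1.35.2,
# using the size and duration reported by containerd in the log line above.
size_bytes = 27_693_066   # from: size "27693066"
duration_s = 4.869015104  # from: in 4.869015104s

# bytes/s -> MiB/s
rate_mib_s = size_bytes / duration_s / (1024 ** 2)
print(f"{rate_mib_s:.2f} MiB/s")  # prints 5.42 MiB/s
```

The same arithmetic applied to the other pulls in this log gives broadly similar rates, consistent with all images coming from the same registry over one link.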
Mar 14 00:36:34.237066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:36:34.644262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:36:34.654724 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:36:35.104838 kubelet[1884]: E0314 00:36:35.104618 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:36:35.109362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:36:35.109636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:36:37.196328 containerd[1451]: time="2026-03-14T00:36:37.195060059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:37.199823 containerd[1451]: time="2026-03-14T00:36:37.198075932Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 14 00:36:37.201556 containerd[1451]: time="2026-03-14T00:36:37.201478323Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:37.208951 containerd[1451]: time="2026-03-14T00:36:37.208509548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:37.210012 containerd[1451]: time="2026-03-14T00:36:37.209773793Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 3.81064752s"
Mar 14 00:36:37.210012 containerd[1451]: time="2026-03-14T00:36:37.209974589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 14 00:36:37.213983 containerd[1451]: time="2026-03-14T00:36:37.213391247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 14 00:36:42.483858 containerd[1451]: time="2026-03-14T00:36:42.481901235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:42.489496 containerd[1451]: time="2026-03-14T00:36:42.489163369Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 14 00:36:42.498059 containerd[1451]: time="2026-03-14T00:36:42.497891957Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:42.520015 containerd[1451]: time="2026-03-14T00:36:42.519470556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:42.529354 containerd[1451]: time="2026-03-14T00:36:42.527438871Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 5.31396804s"
Mar 14 00:36:42.529354 containerd[1451]: time="2026-03-14T00:36:42.527520628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 14 00:36:42.533398 containerd[1451]: time="2026-03-14T00:36:42.533268151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 14 00:36:45.218709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:36:45.234045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:36:46.002908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:36:46.015870 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:36:46.025909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809825795.mount: Deactivated successfully.
Mar 14 00:36:46.191126 kubelet[1912]: E0314 00:36:46.190995 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:36:46.196197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:36:46.196528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:36:47.024429 containerd[1451]: time="2026-03-14T00:36:47.023690653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:47.029757 containerd[1451]: time="2026-03-14T00:36:47.029061369Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 14 00:36:47.035770 containerd[1451]: time="2026-03-14T00:36:47.033164842Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:47.039948 containerd[1451]: time="2026-03-14T00:36:47.039798121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:47.042905 containerd[1451]: time="2026-03-14T00:36:47.041902431Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 4.508528075s"
Mar 14 00:36:47.042905 containerd[1451]: time="2026-03-14T00:36:47.041991118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 14 00:36:47.049511 containerd[1451]: time="2026-03-14T00:36:47.048667284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 14 00:36:47.638240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513438914.mount: Deactivated successfully.
Mar 14 00:36:51.532022 containerd[1451]: time="2026-03-14T00:36:51.531164771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:51.536804 containerd[1451]: time="2026-03-14T00:36:51.536363732Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 14 00:36:51.541176 containerd[1451]: time="2026-03-14T00:36:51.539536192Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:51.553868 containerd[1451]: time="2026-03-14T00:36:51.553168898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:51.558212 containerd[1451]: time="2026-03-14T00:36:51.557061764Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 4.508350795s"
Mar 14 00:36:51.558212 containerd[1451]: time="2026-03-14T00:36:51.557136592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 14 00:36:51.561623 containerd[1451]: time="2026-03-14T00:36:51.561528755Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:36:52.286314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539193025.mount: Deactivated successfully.
Mar 14 00:36:52.302841 containerd[1451]: time="2026-03-14T00:36:52.299919568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:52.302986 containerd[1451]: time="2026-03-14T00:36:52.302918267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 14 00:36:52.304508 containerd[1451]: time="2026-03-14T00:36:52.304381538Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:52.310076 containerd[1451]: time="2026-03-14T00:36:52.309635821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:52.310076 containerd[1451]: time="2026-03-14T00:36:52.309791364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 748.15717ms"
Mar 14 00:36:52.310076 containerd[1451]: time="2026-03-14T00:36:52.310081706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:36:52.313212 containerd[1451]: time="2026-03-14T00:36:52.312417067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 14 00:36:53.083353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996099037.mount: Deactivated successfully.
Mar 14 00:36:54.890095 update_engine[1439]: I20260314 00:36:54.889948 1439 update_attempter.cc:509] Updating boot flags...
Mar 14 00:36:55.144218 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2045)
Mar 14 00:36:55.360622 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2048)
Mar 14 00:36:55.615049 containerd[1451]: time="2026-03-14T00:36:55.614761701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:55.616642 containerd[1451]: time="2026-03-14T00:36:55.616545519Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 14 00:36:55.627221 containerd[1451]: time="2026-03-14T00:36:55.626773151Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:55.634549 containerd[1451]: time="2026-03-14T00:36:55.634400530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:36:55.637492 containerd[1451]: time="2026-03-14T00:36:55.637085691Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.324609116s"
Mar 14 00:36:55.637492 containerd[1451]: time="2026-03-14T00:36:55.637422959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 14 00:36:56.220257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 14 00:36:56.279078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:36:57.242313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:36:57.261268 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:36:57.426384 kubelet[2093]: E0314 00:36:57.419755 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:36:57.433905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:36:57.434369 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:36:59.687612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:36:59.712128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:36:59.861989 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit session-7.scope)... Mar 14 00:36:59.862009 systemd[1]: Reloading... Mar 14 00:37:00.183493 zram_generator::config[2149]: No configuration found. Mar 14 00:37:00.708621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:37:00.981260 systemd[1]: Reloading finished in 1118 ms. Mar 14 00:37:01.274075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:37:01.287328 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:37:01.309630 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:37:01.310233 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:37:01.310681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:01.331619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:37:01.834935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:01.854185 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:37:02.229970 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:37:02.677059 kubelet[2203]: I0314 00:37:02.674358 2203 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:37:02.677059 kubelet[2203]: I0314 00:37:02.675049 2203 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:37:02.683710 kubelet[2203]: I0314 00:37:02.680689 2203 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:37:02.683710 kubelet[2203]: I0314 00:37:02.680708 2203 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:37:02.683710 kubelet[2203]: I0314 00:37:02.681450 2203 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:37:02.831029 kubelet[2203]: I0314 00:37:02.830482 2203 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:37:02.836022 kubelet[2203]: E0314 00:37:02.831319 2203 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:37:02.866628 kubelet[2203]: E0314 00:37:02.864678 2203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:37:02.866628 kubelet[2203]: I0314 00:37:02.864750 2203 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:37:02.892314 kubelet[2203]: I0314 00:37:02.892279 2203 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:37:02.908251 kubelet[2203]: I0314 00:37:02.907359 2203 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:37:02.908251 kubelet[2203]: I0314 00:37:02.907450 2203 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:37:02.908251 kubelet[2203]: I0314 00:37:02.907739 2203 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:37:02.908251 kubelet[2203]: I0314 00:37:02.907753 2203 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:37:02.908631 kubelet[2203]: I0314 00:37:02.907883 2203 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:37:02.921230 kubelet[2203]: I0314 00:37:02.917230 2203 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:37:02.921230 kubelet[2203]: I0314 00:37:02.917503 2203 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:37:02.921230 kubelet[2203]: I0314 00:37:02.917522 2203 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:37:02.921230 kubelet[2203]: I0314 00:37:02.917608 2203 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:37:02.921230 kubelet[2203]: I0314 00:37:02.917630 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:37:02.936539 kubelet[2203]: I0314 00:37:02.936366 2203 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:37:02.949014 kubelet[2203]: I0314 00:37:02.947783 2203 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:37:02.949014 kubelet[2203]: I0314 00:37:02.947844 2203 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:37:02.949014 kubelet[2203]: W0314 00:37:02.947981 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
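The NodeConfig dump above carries the default hard eviction thresholds: memory.available < 100Mi as an absolute quantity, nodefs.available < 10%, imagefs.available < 15%, and so on as fractions of capacity. A hedged sketch of how one such LessThan threshold could be evaluated (illustrative logic, not the kubelet's actual eviction manager code):

```python
def breaches(signal_value, capacity, quantity_bytes=None, percentage=None):
    """Return True when an eviction signal falls below its hard threshold.

    Mirrors the two threshold shapes in the NodeConfig dump: an absolute
    Quantity (e.g. memory.available < 100Mi) or a Percentage of capacity
    (e.g. nodefs.available < 10%). Illustrative only.
    """
    limit = quantity_bytes if quantity_bytes is not None else capacity * percentage
    return signal_value < limit

# memory.available: 50 MiB free against the 100Mi hard threshold -> breach
print(breaches(50 * 1024**2, 8 * 1024**3, quantity_bytes=100 * 1024**2))
# nodefs.available: 20 GiB free on a 100 GiB filesystem against 10% -> no breach
print(breaches(20 * 1024**3, 100 * 1024**3, percentage=0.1))
```

Note the dump pairs each threshold with `GracePeriod:0`, which is what makes these *hard* thresholds: a breach triggers eviction immediately rather than after a soft-eviction grace period.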
Mar 14 00:37:02.970409 kubelet[2203]: I0314 00:37:02.962674 2203 server.go:1257] "Started kubelet"
Mar 14 00:37:02.970409 kubelet[2203]: I0314 00:37:02.967160 2203 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:37:02.970409 kubelet[2203]: I0314 00:37:02.969386 2203 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:37:02.970409 kubelet[2203]: I0314 00:37:02.969931 2203 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:37:02.988039 kubelet[2203]: I0314 00:37:02.987422 2203 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:37:03.000118 kubelet[2203]: I0314 00:37:02.999720 2203 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:37:03.013649 kubelet[2203]: I0314 00:37:02.996044 2203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:37:03.013649 kubelet[2203]: I0314 00:37:02.989888 2203 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:37:03.013649 kubelet[2203]: I0314 00:37:03.005816 2203 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:37:03.013649 kubelet[2203]: E0314 00:37:03.006273 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:37:03.022351 kubelet[2203]: E0314 00:37:03.015470 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms"
Mar 14 00:37:03.052495 kubelet[2203]: I0314 00:37:03.045030 2203 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:37:03.056144 kubelet[2203]: I0314 00:37:03.054231 2203 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:37:03.056144 kubelet[2203]: I0314 00:37:03.054599 2203 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:37:03.060071 kubelet[2203]: I0314 00:37:03.059168 2203 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:37:03.076346 kubelet[2203]: E0314 00:37:03.068425 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8e205f17852a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:37:02.962607402 +0000 UTC m=+1.085217102,LastTimestamp:2026-03-14 00:37:02.962607402 +0000 UTC m=+1.085217102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 14 00:37:03.078198 kubelet[2203]: E0314 00:37:03.076697 2203 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:37:03.102006 kubelet[2203]: I0314 00:37:03.101445 2203 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:37:03.108141 kubelet[2203]: E0314 00:37:03.108039 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:37:03.170777 kubelet[2203]: I0314 00:37:03.170681 2203 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:37:03.170777 kubelet[2203]: I0314 00:37:03.170717 2203 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:37:03.170777 kubelet[2203]: I0314 00:37:03.170737 2203 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:37:03.186095 kubelet[2203]: I0314 00:37:03.184882 2203 policy_none.go:50] "Start"
Mar 14 00:37:03.186095 kubelet[2203]: I0314 00:37:03.185142 2203 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:37:03.186095 kubelet[2203]: I0314 00:37:03.185163 2203 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:37:03.189677 kubelet[2203]: I0314 00:37:03.188765 2203 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:37:03.203320 kubelet[2203]: I0314 00:37:03.203243 2203 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:37:03.203320 kubelet[2203]: I0314 00:37:03.203296 2203 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:37:03.203468 kubelet[2203]: I0314 00:37:03.203331 2203 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:37:03.203468 kubelet[2203]: E0314 00:37:03.203400 2203 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:37:03.209335 kubelet[2203]: E0314 00:37:03.209137 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:37:03.209496 kubelet[2203]: I0314 00:37:03.209480 2203 policy_none.go:44] "Start"
Mar 14 00:37:03.216527 kubelet[2203]: E0314 00:37:03.216188 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms"
Mar 14 00:37:03.246662 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:37:03.293228 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:37:03.305243 kubelet[2203]: E0314 00:37:03.305174 2203 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 14 00:37:03.311011 kubelet[2203]: E0314 00:37:03.310703 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:37:03.316622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:37:03.339961 kubelet[2203]: E0314 00:37:03.339865 2203 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:37:03.340273 kubelet[2203]: I0314 00:37:03.340220 2203 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:37:03.340331 kubelet[2203]: I0314 00:37:03.340266 2203 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:37:03.348698 kubelet[2203]: I0314 00:37:03.344115 2203 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:37:03.356426 kubelet[2203]: E0314 00:37:03.355801 2203 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:37:03.356426 kubelet[2203]: E0314 00:37:03.355862 2203 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:37:03.446832 kubelet[2203]: I0314 00:37:03.446548 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:37:03.447168 kubelet[2203]: E0314 00:37:03.447026 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Mar 14 00:37:03.563219 kubelet[2203]: I0314 00:37:03.560520 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:03.563219 kubelet[2203]: I0314 00:37:03.560630 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:03.563219 kubelet[2203]: I0314 00:37:03.560669 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:03.563219 kubelet[2203]: I0314 00:37:03.560697 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:37:03.563219 kubelet[2203]: I0314 00:37:03.560734 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51cba516fcdc8dcd4680c76237951c27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cba516fcdc8dcd4680c76237951c27\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:37:03.563474 kubelet[2203]: I0314 00:37:03.560762 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51cba516fcdc8dcd4680c76237951c27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51cba516fcdc8dcd4680c76237951c27\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:37:03.563474 kubelet[2203]: I0314 00:37:03.560790 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:03.563474 kubelet[2203]: I0314 00:37:03.560814 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:03.563474 kubelet[2203]: I0314 00:37:03.560837 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51cba516fcdc8dcd4680c76237951c27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cba516fcdc8dcd4680c76237951c27\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:37:03.579241 systemd[1]: Created slice kubepods-burstable-pod51cba516fcdc8dcd4680c76237951c27.slice - libcontainer container kubepods-burstable-pod51cba516fcdc8dcd4680c76237951c27.slice.
Mar 14 00:37:03.604069 kubelet[2203]: E0314 00:37:03.602621 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:03.618085 kubelet[2203]: E0314 00:37:03.617212 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms"
Mar 14 00:37:03.635246 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
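Each reconciler entry above reports a hostPath volume under a UniqueName of the form `kubernetes.io/host-path/<pod-UID>-<volume-name>`, pairing the static pod's UID with the volume name from its manifest. A minimal sketch reproducing that key shape, derived directly from the log lines (not from kubelet source):

```python
def host_path_unique_name(pod_uid: str, volume_name: str) -> str:
    """Build the UniqueName shape seen in the reconciler_common.go entries:
    the host-path plugin name, then the pod UID and volume name joined by '-'.
    """
    return f"kubernetes.io/host-path/{pod_uid}-{volume_name}"

# UID of kube-controller-manager-localhost, taken from the log above
uid = "f420dd303687d038b2bc2fa1d277c55c"
print(host_path_unique_name(uid, "ca-certs"))
```

Because the UID participates in the key, two static pods mounting a volume with the same name (here both kube-apiserver and kube-controller-manager mount `ca-certs`) still get distinct entries in the volume manager's desired state.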
Mar 14 00:37:03.659327 kubelet[2203]: I0314 00:37:03.653695 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:37:03.659327 kubelet[2203]: E0314 00:37:03.658681 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Mar 14 00:37:03.668692 kubelet[2203]: E0314 00:37:03.666552 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:03.686800 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 14 00:37:03.699107 kubelet[2203]: E0314 00:37:03.698928 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:03.726179 kubelet[2203]: E0314 00:37:03.725181 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:03.739644 containerd[1451]: time="2026-03-14T00:37:03.739017416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 14 00:37:03.922905 kubelet[2203]: E0314 00:37:03.919423 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:03.938919 containerd[1451]: time="2026-03-14T00:37:03.926763692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51cba516fcdc8dcd4680c76237951c27,Namespace:kube-system,Attempt:0,}"
Mar 14 00:37:03.979760 kubelet[2203]: E0314 00:37:03.979538 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:03.981952 containerd[1451]: time="2026-03-14T00:37:03.981912298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 14 00:37:04.074023 kubelet[2203]: I0314 00:37:04.073414 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:37:04.074023 kubelet[2203]: E0314 00:37:04.073819 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Mar 14 00:37:04.422087 kubelet[2203]: E0314 00:37:04.421669 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s"
Mar 14 00:37:04.532238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952098218.mount: Deactivated successfully.
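The lease controller's retry interval doubles across these entries: 200ms, then 400ms, 800ms, 1.6s, and later 3.2s, while the apiserver at 10.0.0.132:6443 is still refusing connections. A sketch of that doubling backoff (the cap value is an assumption; the kubelet's actual policy and cap may differ):

```python
from itertools import islice

def retry_intervals(base=0.2, factor=2.0, cap=7.0):
    """Yield doubling retry intervals in seconds, matching the
    200ms -> 400ms -> 800ms -> 1.6s -> 3.2s progression in the
    lease-controller log lines. The cap of 7s is an assumption."""
    interval = base
    while True:
        yield min(interval, cap)
        interval *= factor

print([round(i, 1) for i in islice(retry_intervals(), 5)])
```

Doubling with a cap keeps the retry pressure on a down apiserver bounded while still recovering quickly once it comes up, which is exactly the behavior visible once the static control-plane pods start later in the log.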
Mar 14 00:37:04.574323 containerd[1451]: time="2026-03-14T00:37:04.574268282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:04.587880 containerd[1451]: time="2026-03-14T00:37:04.586946433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:37:04.593612 containerd[1451]: time="2026-03-14T00:37:04.593432800Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:04.608107 containerd[1451]: time="2026-03-14T00:37:04.607885403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:37:04.612829 containerd[1451]: time="2026-03-14T00:37:04.611729922Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:04.615984 containerd[1451]: time="2026-03-14T00:37:04.615896712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:37:04.620784 containerd[1451]: time="2026-03-14T00:37:04.620672587Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:04.629504 containerd[1451]: time="2026-03-14T00:37:04.629317205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:04.640748 containerd[1451]: time="2026-03-14T00:37:04.640635358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 901.506879ms"
Mar 14 00:37:04.643267 containerd[1451]: time="2026-03-14T00:37:04.643002334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.147627ms"
Mar 14 00:37:04.654237 containerd[1451]: time="2026-03-14T00:37:04.654161532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.010994ms"
Mar 14 00:37:04.881296 kubelet[2203]: I0314 00:37:04.881206 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:37:04.884670 kubelet[2203]: E0314 00:37:04.881872 2203 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost"
Mar 14 00:37:05.010468 kubelet[2203]: E0314 00:37:05.010317 2203 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:37:05.222132 containerd[1451]: time="2026-03-14T00:37:05.221120962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:37:05.222132 containerd[1451]: time="2026-03-14T00:37:05.221287383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:37:05.222132 containerd[1451]: time="2026-03-14T00:37:05.221354762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:05.222917 containerd[1451]: time="2026-03-14T00:37:05.222224494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:05.228717 containerd[1451]: time="2026-03-14T00:37:05.228370396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:37:05.228717 containerd[1451]: time="2026-03-14T00:37:05.228424380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:37:05.232746 containerd[1451]: time="2026-03-14T00:37:05.228879134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:05.232746 containerd[1451]: time="2026-03-14T00:37:05.229385908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:05.247318 containerd[1451]: time="2026-03-14T00:37:05.243396296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:37:05.247318 containerd[1451]: time="2026-03-14T00:37:05.243490978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:37:05.247318 containerd[1451]: time="2026-03-14T00:37:05.243507209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:05.247318 containerd[1451]: time="2026-03-14T00:37:05.243926675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:05.269902 systemd[1]: Started cri-containerd-ca53a3a586b9fcc0e8bc1b2568dd3b4f77e207f20e82433e032c1b5bb24bbdea.scope - libcontainer container ca53a3a586b9fcc0e8bc1b2568dd3b4f77e207f20e82433e032c1b5bb24bbdea.
Mar 14 00:37:05.312473 systemd[1]: Started cri-containerd-7fc73cf5d5b750d65b57160e036660fd3cfe6a55328327f55238ffafa9f4e92a.scope - libcontainer container 7fc73cf5d5b750d65b57160e036660fd3cfe6a55328327f55238ffafa9f4e92a.
Mar 14 00:37:05.337105 systemd[1]: Started cri-containerd-584fc64f65590be287558f11730db2dfe20767b0d01268544205f4953a13f80d.scope - libcontainer container 584fc64f65590be287558f11730db2dfe20767b0d01268544205f4953a13f80d.
Mar 14 00:37:05.538228 containerd[1451]: time="2026-03-14T00:37:05.537927912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca53a3a586b9fcc0e8bc1b2568dd3b4f77e207f20e82433e032c1b5bb24bbdea\""
Mar 14 00:37:05.542674 kubelet[2203]: E0314 00:37:05.541713 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:05.580237 containerd[1451]: time="2026-03-14T00:37:05.579891871Z" level=info msg="CreateContainer within sandbox \"ca53a3a586b9fcc0e8bc1b2568dd3b4f77e207f20e82433e032c1b5bb24bbdea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:37:05.586537 containerd[1451]: time="2026-03-14T00:37:05.583770407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51cba516fcdc8dcd4680c76237951c27,Namespace:kube-system,Attempt:0,} returns sandbox id \"584fc64f65590be287558f11730db2dfe20767b0d01268544205f4953a13f80d\""
Mar 14 00:37:05.588888 kubelet[2203]: E0314 00:37:05.588860 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:05.609177 containerd[1451]: time="2026-03-14T00:37:05.608023790Z" level=info msg="CreateContainer within sandbox \"584fc64f65590be287558f11730db2dfe20767b0d01268544205f4953a13f80d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:37:05.654412 containerd[1451]: time="2026-03-14T00:37:05.654242815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc73cf5d5b750d65b57160e036660fd3cfe6a55328327f55238ffafa9f4e92a\""
Mar 14 00:37:05.655881 kubelet[2203]: E0314 00:37:05.655777 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:05.717115 containerd[1451]: time="2026-03-14T00:37:05.716852945Z" level=info msg="CreateContainer within sandbox \"7fc73cf5d5b750d65b57160e036660fd3cfe6a55328327f55238ffafa9f4e92a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:37:05.726332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149811074.mount: Deactivated successfully.
Mar 14 00:37:05.730832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047799325.mount: Deactivated successfully.
Mar 14 00:37:05.741059 containerd[1451]: time="2026-03-14T00:37:05.740960854Z" level=info msg="CreateContainer within sandbox \"ca53a3a586b9fcc0e8bc1b2568dd3b4f77e207f20e82433e032c1b5bb24bbdea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"77b066850836ad36a6ddc28d97631abe80f24b271662aa4c352c19e3c3aeda3e\""
Mar 14 00:37:05.743539 containerd[1451]: time="2026-03-14T00:37:05.742276312Z" level=info msg="StartContainer for \"77b066850836ad36a6ddc28d97631abe80f24b271662aa4c352c19e3c3aeda3e\""
Mar 14 00:37:05.773363 containerd[1451]: time="2026-03-14T00:37:05.773251780Z" level=info msg="CreateContainer within sandbox \"584fc64f65590be287558f11730db2dfe20767b0d01268544205f4953a13f80d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c7c9950d6384a08cbd4df80816a59e524ae1f3b2082a75ec9305deee72bc9f8\""
Mar 14 00:37:05.774510 containerd[1451]: time="2026-03-14T00:37:05.774430721Z" level=info msg="StartContainer for \"5c7c9950d6384a08cbd4df80816a59e524ae1f3b2082a75ec9305deee72bc9f8\""
Mar 14 00:37:05.775341 containerd[1451]: time="2026-03-14T00:37:05.775289773Z" level=info msg="CreateContainer within sandbox \"7fc73cf5d5b750d65b57160e036660fd3cfe6a55328327f55238ffafa9f4e92a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67d8c119668bca820cb514f16ffbc230afc76ea0799f83f3213aeed0540d0e02\""
Mar 14 00:37:05.775816 containerd[1451]: time="2026-03-14T00:37:05.775772733Z" level=info msg="StartContainer for \"67d8c119668bca820cb514f16ffbc230afc76ea0799f83f3213aeed0540d0e02\""
Mar 14 00:37:05.813711 systemd[1]: Started cri-containerd-77b066850836ad36a6ddc28d97631abe80f24b271662aa4c352c19e3c3aeda3e.scope - libcontainer container 77b066850836ad36a6ddc28d97631abe80f24b271662aa4c352c19e3c3aeda3e.
Mar 14 00:37:05.857803 systemd[1]: Started cri-containerd-67d8c119668bca820cb514f16ffbc230afc76ea0799f83f3213aeed0540d0e02.scope - libcontainer container 67d8c119668bca820cb514f16ffbc230afc76ea0799f83f3213aeed0540d0e02.
Mar 14 00:37:05.869150 systemd[1]: Started cri-containerd-5c7c9950d6384a08cbd4df80816a59e524ae1f3b2082a75ec9305deee72bc9f8.scope - libcontainer container 5c7c9950d6384a08cbd4df80816a59e524ae1f3b2082a75ec9305deee72bc9f8.
Mar 14 00:37:06.026181 kubelet[2203]: E0314 00:37:06.026021 2203 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="3.2s"
Mar 14 00:37:06.029623 containerd[1451]: time="2026-03-14T00:37:06.028401285Z" level=info msg="StartContainer for \"77b066850836ad36a6ddc28d97631abe80f24b271662aa4c352c19e3c3aeda3e\" returns successfully"
Mar 14 00:37:06.029623 containerd[1451]: time="2026-03-14T00:37:06.028969758Z" level=info msg="StartContainer for \"5c7c9950d6384a08cbd4df80816a59e524ae1f3b2082a75ec9305deee72bc9f8\" returns successfully"
Mar 14 00:37:06.054061 containerd[1451]: time="2026-03-14T00:37:06.053983164Z" level=info msg="StartContainer for \"67d8c119668bca820cb514f16ffbc230afc76ea0799f83f3213aeed0540d0e02\" returns successfully"
Mar 14 00:37:06.233329 kubelet[2203]: E0314 00:37:06.233255 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:06.233505 kubelet[2203]: E0314 00:37:06.233484 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:06.241219 kubelet[2203]: E0314 00:37:06.241156 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:06.241404 kubelet[2203]: E0314 00:37:06.241348 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:06.242772 kubelet[2203]: E0314 00:37:06.242696 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:06.243295 kubelet[2203]: E0314 00:37:06.243097 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:06.497814 kubelet[2203]: I0314 00:37:06.495421 2203 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:37:06.511085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752325447.mount: Deactivated successfully.
Mar 14 00:37:07.249050 kubelet[2203]: E0314 00:37:07.248616 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:07.249050 kubelet[2203]: E0314 00:37:07.248859 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:07.250620 kubelet[2203]: E0314 00:37:07.250511 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:07.250716 kubelet[2203]: E0314 00:37:07.250690 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:08.303503 kubelet[2203]: E0314 00:37:08.302412 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:08.303503 kubelet[2203]: E0314 00:37:08.302655 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:08.489826 kubelet[2203]: E0314 00:37:08.489720 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:08.490728 kubelet[2203]: E0314 00:37:08.490310 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:09.079440 kubelet[2203]: I0314 00:37:09.079288 2203 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 14 00:37:09.079440 kubelet[2203]: E0314 
00:37:09.079355 2203 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 14 00:37:09.107290 kubelet[2203]: E0314 00:37:09.107177 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.215451 kubelet[2203]: E0314 00:37:09.215212 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.320104 kubelet[2203]: E0314 00:37:09.320031 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.424373 kubelet[2203]: E0314 00:37:09.423518 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.531856 kubelet[2203]: E0314 00:37:09.528723 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.635095 kubelet[2203]: E0314 00:37:09.634884 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.735768 kubelet[2203]: E0314 00:37:09.735549 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:09.761348 kubelet[2203]: E0314 00:37:09.760701 2203 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:09.761348 kubelet[2203]: E0314 00:37:09.761005 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:09.836844 kubelet[2203]: E0314 00:37:09.836719 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 
14 00:37:09.937721 kubelet[2203]: E0314 00:37:09.937116 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.038073 kubelet[2203]: E0314 00:37:10.037830 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.138712 kubelet[2203]: E0314 00:37:10.138546 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.239867 kubelet[2203]: E0314 00:37:10.239480 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.340893 kubelet[2203]: E0314 00:37:10.340660 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.441018 kubelet[2203]: E0314 00:37:10.440845 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.541656 kubelet[2203]: E0314 00:37:10.541547 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.643535 kubelet[2203]: E0314 00:37:10.643240 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.743799 kubelet[2203]: E0314 00:37:10.743661 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.849131 kubelet[2203]: E0314 00:37:10.847668 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:10.954477 kubelet[2203]: E0314 00:37:10.953930 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.054804 kubelet[2203]: E0314 00:37:11.054726 2203 kubelet_node_status.go:392] "Error 
getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.156690 kubelet[2203]: E0314 00:37:11.156192 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.257711 kubelet[2203]: E0314 00:37:11.256450 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.357067 kubelet[2203]: E0314 00:37:11.356802 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.460296 kubelet[2203]: E0314 00:37:11.458488 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.560325 kubelet[2203]: E0314 00:37:11.559938 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.660997 kubelet[2203]: E0314 00:37:11.660959 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.762380 kubelet[2203]: E0314 00:37:11.762203 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:11.862898 kubelet[2203]: E0314 00:37:11.862708 2203 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:37:12.015697 kubelet[2203]: I0314 00:37:12.011057 2203 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:12.067966 kubelet[2203]: I0314 00:37:12.067318 2203 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:12.092092 kubelet[2203]: I0314 00:37:12.091861 2203 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:12.943953 kubelet[2203]: 
I0314 00:37:12.938708 2203 apiserver.go:52] "Watching apiserver" Mar 14 00:37:12.959669 kubelet[2203]: E0314 00:37:12.955943 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:12.959669 kubelet[2203]: E0314 00:37:12.959325 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:12.959669 kubelet[2203]: E0314 00:37:12.959522 2203 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:13.062924 kubelet[2203]: I0314 00:37:13.062825 2203 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:37:13.221648 systemd[1]: Reloading requested from client PID 2497 ('systemctl') (unit session-7.scope)... Mar 14 00:37:13.221666 systemd[1]: Reloading... 
Mar 14 00:37:13.306681 kubelet[2203]: I0314 00:37:13.294363 2203 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.294348789 podStartE2EDuration="1.294348789s" podCreationTimestamp="2026-03-14 00:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:37:13.293985937 +0000 UTC m=+11.416595647" watchObservedRunningTime="2026-03-14 00:37:13.294348789 +0000 UTC m=+11.416958509" Mar 14 00:37:13.362092 kubelet[2203]: I0314 00:37:13.361947 2203 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.361930044 podStartE2EDuration="1.361930044s" podCreationTimestamp="2026-03-14 00:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:37:13.357115795 +0000 UTC m=+11.479725485" watchObservedRunningTime="2026-03-14 00:37:13.361930044 +0000 UTC m=+11.484539774" Mar 14 00:37:13.420631 zram_generator::config[2538]: No configuration found. Mar 14 00:37:13.593865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:37:13.740254 systemd[1]: Reloading finished in 517 ms. Mar 14 00:37:13.834438 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:37:13.875213 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:37:13.875851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:13.875944 systemd[1]: kubelet.service: Consumed 2.308s CPU time, 127.5M memory peak, 0B memory swap peak. Mar 14 00:37:13.897546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:37:14.346600 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:37:14.348898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:14.524312 kubelet[2583]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:37:14.561381 kubelet[2583]: I0314 00:37:14.560827 2583 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:37:14.561381 kubelet[2583]: I0314 00:37:14.560895 2583 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:37:14.561381 kubelet[2583]: I0314 00:37:14.560922 2583 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:37:14.561381 kubelet[2583]: I0314 00:37:14.560931 2583 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:37:14.562407 kubelet[2583]: I0314 00:37:14.561885 2583 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:37:14.564785 kubelet[2583]: I0314 00:37:14.564171 2583 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:37:14.569049 kubelet[2583]: I0314 00:37:14.568974 2583 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:37:14.579951 kubelet[2583]: E0314 00:37:14.578704 2583 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:37:14.579951 kubelet[2583]: I0314 00:37:14.578780 2583 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Mar 14 00:37:14.597517 kubelet[2583]: I0314 00:37:14.594872 2583 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 14 00:37:14.597517 kubelet[2583]: I0314 00:37:14.595169 2583 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:37:14.597517 kubelet[2583]: I0314 00:37:14.595198 2583 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:37:14.597517 kubelet[2583]: I0314 00:37:14.595400 2583 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.595414 2583 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.595465 2583 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.595859 2583 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.596057 2583 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.596087 2583 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.596141 2583 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:37:14.597927 kubelet[2583]: I0314 00:37:14.596164 2583 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:37:14.608696 kubelet[2583]: I0314 00:37:14.607914 2583 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:37:14.621506 kubelet[2583]: I0314 00:37:14.616841 2583 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:37:14.621506 kubelet[2583]: I0314 00:37:14.616891 2583 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:37:14.640294 kubelet[2583]: I0314 00:37:14.638371 2583 server.go:1257] "Started kubelet" Mar 14 00:37:14.640294 kubelet[2583]: I0314 00:37:14.639686 2583 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:37:14.645152 kubelet[2583]: 
I0314 00:37:14.642472 2583 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:37:14.645152 kubelet[2583]: I0314 00:37:14.643737 2583 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:37:14.651752 kubelet[2583]: I0314 00:37:14.651390 2583 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:37:14.665195 kubelet[2583]: I0314 00:37:14.657227 2583 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:37:14.671148 kubelet[2583]: I0314 00:37:14.668975 2583 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:37:14.681344 kubelet[2583]: I0314 00:37:14.673409 2583 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:37:14.688740 kubelet[2583]: I0314 00:37:14.683383 2583 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:37:14.688740 kubelet[2583]: I0314 00:37:14.687877 2583 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:37:14.688740 kubelet[2583]: I0314 00:37:14.687912 2583 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:37:14.688740 kubelet[2583]: I0314 00:37:14.688383 2583 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:37:14.699258 kubelet[2583]: I0314 00:37:14.697908 2583 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:37:14.699258 kubelet[2583]: I0314 00:37:14.697930 2583 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:37:14.707829 kubelet[2583]: E0314 00:37:14.706128 2583 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:37:14.785522 kubelet[2583]: I0314 00:37:14.785417 2583 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:37:14.799138 kubelet[2583]: I0314 00:37:14.797059 2583 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:37:14.799138 kubelet[2583]: I0314 00:37:14.797183 2583 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:37:14.799138 kubelet[2583]: I0314 00:37:14.797212 2583 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:37:14.799138 kubelet[2583]: E0314 00:37:14.797323 2583 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.886953 2583 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.886973 2583 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.886994 2583 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.887142 2583 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.887156 2583 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.887175 2583 policy_none.go:50] "Start" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.887185 2583 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.887198 2583 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 
00:37:14.890448 kubelet[2583]: I0314 00:37:14.887331 2583 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 14 00:37:14.890448 kubelet[2583]: I0314 00:37:14.887341 2583 policy_none.go:44] "Start" Mar 14 00:37:14.898479 kubelet[2583]: E0314 00:37:14.898086 2583 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:37:14.913526 kubelet[2583]: E0314 00:37:14.912718 2583 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:37:14.913526 kubelet[2583]: I0314 00:37:14.912902 2583 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:37:14.913526 kubelet[2583]: I0314 00:37:14.912915 2583 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:37:14.913526 kubelet[2583]: I0314 00:37:14.913440 2583 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:37:14.925806 kubelet[2583]: E0314 00:37:14.925786 2583 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 14 00:37:15.052818 kubelet[2583]: I0314 00:37:15.052748 2583 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 14 00:37:15.104817 kubelet[2583]: I0314 00:37:15.104225 2583 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:15.105182 kubelet[2583]: I0314 00:37:15.105161 2583 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.106400 kubelet[2583]: I0314 00:37:15.106055 2583 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.112517 kubelet[2583]: I0314 00:37:15.112448 2583 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 14 00:37:15.112707 kubelet[2583]: I0314 00:37:15.112656 2583 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 14 00:37:15.194820 kubelet[2583]: I0314 00:37:15.193230 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51cba516fcdc8dcd4680c76237951c27-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cba516fcdc8dcd4680c76237951c27\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.194820 kubelet[2583]: I0314 00:37:15.193310 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.194820 kubelet[2583]: I0314 00:37:15.193336 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.194820 kubelet[2583]: I0314 00:37:15.193356 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51cba516fcdc8dcd4680c76237951c27-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cba516fcdc8dcd4680c76237951c27\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.200808 kubelet[2583]: I0314 00:37:15.199794 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51cba516fcdc8dcd4680c76237951c27-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51cba516fcdc8dcd4680c76237951c27\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.200808 kubelet[2583]: I0314 00:37:15.199865 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.200808 kubelet[2583]: I0314 00:37:15.199888 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.200808 kubelet[2583]: I0314 00:37:15.199910 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.200808 kubelet[2583]: I0314 00:37:15.199934 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:15.200808 kubelet[2583]: E0314 00:37:15.200313 2583 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:15.201108 kubelet[2583]: E0314 00:37:15.200704 2583 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:15.213817 kubelet[2583]: E0314 00:37:15.213780 2583 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.504124 kubelet[2583]: E0314 00:37:15.501340 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:15.504124 kubelet[2583]: E0314 00:37:15.503451 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:15.516930 kubelet[2583]: E0314 00:37:15.516776 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 14 00:37:15.604346 kubelet[2583]: I0314 00:37:15.600761 2583 apiserver.go:52] "Watching apiserver" Mar 14 00:37:15.693756 kubelet[2583]: I0314 00:37:15.691219 2583 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:37:15.874793 kubelet[2583]: E0314 00:37:15.872857 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:15.874793 kubelet[2583]: I0314 00:37:15.873465 2583 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.874793 kubelet[2583]: E0314 00:37:15.874021 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:15.922544 kubelet[2583]: E0314 00:37:15.922445 2583 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:15.924783 kubelet[2583]: E0314 00:37:15.924307 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:16.877139 kubelet[2583]: E0314 00:37:16.876809 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:16.881474 kubelet[2583]: E0314 00:37:16.881404 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:18.781510 kubelet[2583]: I0314 00:37:18.781341 2583 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Mar 14 00:37:18.791657 containerd[1451]: time="2026-03-14T00:37:18.783984410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:37:18.796437 kubelet[2583]: I0314 00:37:18.795912 2583 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:37:19.524243 systemd[1]: Created slice kubepods-besteffort-pod43c63d4a_f6a7_4f9f_9469_a6d6dffb01b7.slice - libcontainer container kubepods-besteffort-pod43c63d4a_f6a7_4f9f_9469_a6d6dffb01b7.slice. Mar 14 00:37:19.591670 kubelet[2583]: I0314 00:37:19.591482 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7-kube-proxy\") pod \"kube-proxy-7q6lj\" (UID: \"43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7\") " pod="kube-system/kube-proxy-7q6lj" Mar 14 00:37:19.591670 kubelet[2583]: I0314 00:37:19.591656 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk48g\" (UniqueName: \"kubernetes.io/projected/43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7-kube-api-access-dk48g\") pod \"kube-proxy-7q6lj\" (UID: \"43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7\") " pod="kube-system/kube-proxy-7q6lj" Mar 14 00:37:19.592327 kubelet[2583]: I0314 00:37:19.591712 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7-xtables-lock\") pod \"kube-proxy-7q6lj\" (UID: \"43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7\") " pod="kube-system/kube-proxy-7q6lj" Mar 14 00:37:19.592327 kubelet[2583]: I0314 00:37:19.591811 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7-lib-modules\") pod 
\"kube-proxy-7q6lj\" (UID: \"43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7\") " pod="kube-system/kube-proxy-7q6lj" Mar 14 00:37:19.713189 systemd[1]: Created slice kubepods-besteffort-pod95c0e8ad_3278_4354_8c1e_6cb7c9289717.slice - libcontainer container kubepods-besteffort-pod95c0e8ad_3278_4354_8c1e_6cb7c9289717.slice. Mar 14 00:37:19.795104 kubelet[2583]: I0314 00:37:19.794171 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95c0e8ad-3278-4354-8c1e-6cb7c9289717-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-tz9gw\" (UID: \"95c0e8ad-3278-4354-8c1e-6cb7c9289717\") " pod="tigera-operator/tigera-operator-6cf4cccc57-tz9gw" Mar 14 00:37:19.795104 kubelet[2583]: I0314 00:37:19.794375 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwjr\" (UniqueName: \"kubernetes.io/projected/95c0e8ad-3278-4354-8c1e-6cb7c9289717-kube-api-access-gzwjr\") pod \"tigera-operator-6cf4cccc57-tz9gw\" (UID: \"95c0e8ad-3278-4354-8c1e-6cb7c9289717\") " pod="tigera-operator/tigera-operator-6cf4cccc57-tz9gw" Mar 14 00:37:19.843452 kubelet[2583]: E0314 00:37:19.843270 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:19.844688 containerd[1451]: time="2026-03-14T00:37:19.844640146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7q6lj,Uid:43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7,Namespace:kube-system,Attempt:0,}" Mar 14 00:37:19.980868 containerd[1451]: time="2026-03-14T00:37:19.979993917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:19.980868 containerd[1451]: time="2026-03-14T00:37:19.980347148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:19.980868 containerd[1451]: time="2026-03-14T00:37:19.980371815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:19.980868 containerd[1451]: time="2026-03-14T00:37:19.980518845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:20.023310 containerd[1451]: time="2026-03-14T00:37:20.023228772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-tz9gw,Uid:95c0e8ad-3278-4354-8c1e-6cb7c9289717,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:37:20.023862 systemd[1]: Started cri-containerd-71ffc4f8a60826254ac2d1c1e9f2d907010553ed7d1873ffc85ae8fd9825f6bc.scope - libcontainer container 71ffc4f8a60826254ac2d1c1e9f2d907010553ed7d1873ffc85ae8fd9825f6bc. Mar 14 00:37:20.074273 containerd[1451]: time="2026-03-14T00:37:20.073445730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7q6lj,Uid:43c63d4a-f6a7-4f9f-9469-a6d6dffb01b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ffc4f8a60826254ac2d1c1e9f2d907010553ed7d1873ffc85ae8fd9825f6bc\"" Mar 14 00:37:20.074743 containerd[1451]: time="2026-03-14T00:37:20.074364383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:20.074743 containerd[1451]: time="2026-03-14T00:37:20.074443112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:20.074743 containerd[1451]: time="2026-03-14T00:37:20.074463381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:20.075157 containerd[1451]: time="2026-03-14T00:37:20.074815650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:20.075502 kubelet[2583]: E0314 00:37:20.075349 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:20.084219 containerd[1451]: time="2026-03-14T00:37:20.084125349Z" level=info msg="CreateContainer within sandbox \"71ffc4f8a60826254ac2d1c1e9f2d907010553ed7d1873ffc85ae8fd9825f6bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:37:20.120586 containerd[1451]: time="2026-03-14T00:37:20.120375138Z" level=info msg="CreateContainer within sandbox \"71ffc4f8a60826254ac2d1c1e9f2d907010553ed7d1873ffc85ae8fd9825f6bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb883e1cedbcc49ea0f4fdb77cdd8b23804843ece3d841751eb5dd660331390c\"" Mar 14 00:37:20.121036 systemd[1]: Started cri-containerd-bf827212f9f3805819bd10335ffe44991a58e075867f591c7ab9951dc2e8c225.scope - libcontainer container bf827212f9f3805819bd10335ffe44991a58e075867f591c7ab9951dc2e8c225. Mar 14 00:37:20.124656 containerd[1451]: time="2026-03-14T00:37:20.122625410Z" level=info msg="StartContainer for \"eb883e1cedbcc49ea0f4fdb77cdd8b23804843ece3d841751eb5dd660331390c\"" Mar 14 00:37:20.172858 systemd[1]: Started cri-containerd-eb883e1cedbcc49ea0f4fdb77cdd8b23804843ece3d841751eb5dd660331390c.scope - libcontainer container eb883e1cedbcc49ea0f4fdb77cdd8b23804843ece3d841751eb5dd660331390c. 
Mar 14 00:37:20.190263 containerd[1451]: time="2026-03-14T00:37:20.190118620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-tz9gw,Uid:95c0e8ad-3278-4354-8c1e-6cb7c9289717,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bf827212f9f3805819bd10335ffe44991a58e075867f591c7ab9951dc2e8c225\"" Mar 14 00:37:20.194780 containerd[1451]: time="2026-03-14T00:37:20.194494824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:37:20.232446 containerd[1451]: time="2026-03-14T00:37:20.232332816Z" level=info msg="StartContainer for \"eb883e1cedbcc49ea0f4fdb77cdd8b23804843ece3d841751eb5dd660331390c\" returns successfully" Mar 14 00:37:20.427335 kubelet[2583]: E0314 00:37:20.427115 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:20.876204 kubelet[2583]: E0314 00:37:20.874047 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:20.969687 kubelet[2583]: E0314 00:37:20.969392 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:21.108075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131758971.mount: Deactivated successfully. 
Mar 14 00:37:22.553807 containerd[1451]: time="2026-03-14T00:37:22.553697794Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:22.555269 containerd[1451]: time="2026-03-14T00:37:22.555188652Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:37:22.556816 containerd[1451]: time="2026-03-14T00:37:22.556734278Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:22.561246 containerd[1451]: time="2026-03-14T00:37:22.561153839Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:22.562497 containerd[1451]: time="2026-03-14T00:37:22.562379897Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.367824998s" Mar 14 00:37:22.562497 containerd[1451]: time="2026-03-14T00:37:22.562449750Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:37:22.572344 containerd[1451]: time="2026-03-14T00:37:22.572266028Z" level=info msg="CreateContainer within sandbox \"bf827212f9f3805819bd10335ffe44991a58e075867f591c7ab9951dc2e8c225\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:37:22.602997 containerd[1451]: time="2026-03-14T00:37:22.602666878Z" level=info msg="CreateContainer within sandbox 
\"bf827212f9f3805819bd10335ffe44991a58e075867f591c7ab9951dc2e8c225\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3b9f27e78140226c3df9b69eff766655e5831dd7f9a7187edbb445b5d7dd7f18\"" Mar 14 00:37:22.605547 containerd[1451]: time="2026-03-14T00:37:22.604619918Z" level=info msg="StartContainer for \"3b9f27e78140226c3df9b69eff766655e5831dd7f9a7187edbb445b5d7dd7f18\"" Mar 14 00:37:22.691931 systemd[1]: Started cri-containerd-3b9f27e78140226c3df9b69eff766655e5831dd7f9a7187edbb445b5d7dd7f18.scope - libcontainer container 3b9f27e78140226c3df9b69eff766655e5831dd7f9a7187edbb445b5d7dd7f18. Mar 14 00:37:22.769304 containerd[1451]: time="2026-03-14T00:37:22.768111005Z" level=info msg="StartContainer for \"3b9f27e78140226c3df9b69eff766655e5831dd7f9a7187edbb445b5d7dd7f18\" returns successfully" Mar 14 00:37:22.993721 kubelet[2583]: I0314 00:37:22.992515 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-7q6lj" podStartSLOduration=3.9924980960000003 podStartE2EDuration="3.992498096s" podCreationTimestamp="2026-03-14 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:37:20.993146016 +0000 UTC m=+6.633233123" watchObservedRunningTime="2026-03-14 00:37:22.992498096 +0000 UTC m=+8.632585192" Mar 14 00:37:22.993721 kubelet[2583]: I0314 00:37:22.992695 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-tz9gw" podStartSLOduration=1.62288513 podStartE2EDuration="3.992687636s" podCreationTimestamp="2026-03-14 00:37:19 +0000 UTC" firstStartedPulling="2026-03-14 00:37:20.193896948 +0000 UTC m=+5.833984044" lastFinishedPulling="2026-03-14 00:37:22.563699463 +0000 UTC m=+8.203786550" observedRunningTime="2026-03-14 00:37:22.992362859 +0000 UTC m=+8.632449966" watchObservedRunningTime="2026-03-14 00:37:22.992687636 +0000 UTC 
m=+8.632774742" Mar 14 00:37:24.834024 kubelet[2583]: E0314 00:37:24.833977 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:30.454691 kubelet[2583]: E0314 00:37:30.452513 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:30.884436 kubelet[2583]: E0314 00:37:30.884077 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:30.900435 sudo[1631]: pam_unix(sudo:session): session closed for user root Mar 14 00:37:30.907018 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 14 00:37:30.916743 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:32968.service: Deactivated successfully. Mar 14 00:37:30.923984 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:37:30.924265 systemd[1]: session-7.scope: Consumed 8.052s CPU time, 159.6M memory peak, 0B memory swap peak. Mar 14 00:37:30.932709 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:37:31.012205 systemd-logind[1437]: Removed session 7. Mar 14 00:37:34.858150 kubelet[2583]: E0314 00:37:34.853676 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:35.533541 systemd[1]: Created slice kubepods-besteffort-poddd1db0a4_3ca6_4c13_ab5e_a62c7fc81702.slice - libcontainer container kubepods-besteffort-poddd1db0a4_3ca6_4c13_ab5e_a62c7fc81702.slice. 
Mar 14 00:37:35.594983 kubelet[2583]: I0314 00:37:35.594124 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702-typha-certs\") pod \"calico-typha-76b9b979b-54z4p\" (UID: \"dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702\") " pod="calico-system/calico-typha-76b9b979b-54z4p" Mar 14 00:37:35.594983 kubelet[2583]: I0314 00:37:35.594211 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702-tigera-ca-bundle\") pod \"calico-typha-76b9b979b-54z4p\" (UID: \"dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702\") " pod="calico-system/calico-typha-76b9b979b-54z4p" Mar 14 00:37:35.594983 kubelet[2583]: I0314 00:37:35.594248 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzzx\" (UniqueName: \"kubernetes.io/projected/dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702-kube-api-access-ltzzx\") pod \"calico-typha-76b9b979b-54z4p\" (UID: \"dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702\") " pod="calico-system/calico-typha-76b9b979b-54z4p" Mar 14 00:37:35.828212 systemd[1]: Created slice kubepods-besteffort-pod2cf7be10_049d_425b_aabf_aebc5ad677b3.slice - libcontainer container kubepods-besteffort-pod2cf7be10_049d_425b_aabf_aebc5ad677b3.slice. 
Mar 14 00:37:35.864003 kubelet[2583]: E0314 00:37:35.861936 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:35.865131 containerd[1451]: time="2026-03-14T00:37:35.865006654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b9b979b-54z4p,Uid:dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702,Namespace:calico-system,Attempt:0,}" Mar 14 00:37:35.901780 kubelet[2583]: I0314 00:37:35.900199 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-cni-bin-dir\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.901780 kubelet[2583]: I0314 00:37:35.900383 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-cni-net-dir\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.901780 kubelet[2583]: I0314 00:37:35.900516 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-xtables-lock\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.901780 kubelet[2583]: I0314 00:37:35.900544 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-flexvol-driver-host\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " 
pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.901780 kubelet[2583]: I0314 00:37:35.900691 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-policysync\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.903223 kubelet[2583]: I0314 00:37:35.900715 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-nodeproc\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.903223 kubelet[2583]: I0314 00:37:35.900801 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-sys-fs\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.903223 kubelet[2583]: I0314 00:37:35.900828 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2cf7be10-049d-425b-aabf-aebc5ad677b3-node-certs\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.903223 kubelet[2583]: I0314 00:37:35.900853 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-cni-log-dir\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.903223 kubelet[2583]: I0314 00:37:35.900871 2583 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-var-lib-calico\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.930176 kubelet[2583]: I0314 00:37:35.900891 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-var-run-calico\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.979729 kubelet[2583]: I0314 00:37:35.978857 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf7be10-049d-425b-aabf-aebc5ad677b3-tigera-ca-bundle\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.979729 kubelet[2583]: I0314 00:37:35.979218 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-lib-modules\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.979729 kubelet[2583]: I0314 00:37:35.979259 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6cl4\" (UniqueName: \"kubernetes.io/projected/2cf7be10-049d-425b-aabf-aebc5ad677b3-kube-api-access-r6cl4\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:35.979729 kubelet[2583]: I0314 00:37:35.979347 2583 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2cf7be10-049d-425b-aabf-aebc5ad677b3-bpffs\") pod \"calico-node-gttsx\" (UID: \"2cf7be10-049d-425b-aabf-aebc5ad677b3\") " pod="calico-system/calico-node-gttsx" Mar 14 00:37:36.091674 kubelet[2583]: E0314 00:37:36.089899 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.091674 kubelet[2583]: W0314 00:37:36.089925 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.091674 kubelet[2583]: E0314 00:37:36.089947 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.118446 kubelet[2583]: E0314 00:37:36.118266 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.118628 kubelet[2583]: W0314 00:37:36.118395 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.118628 kubelet[2583]: E0314 00:37:36.118510 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.158784 kubelet[2583]: E0314 00:37:36.155713 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.158784 kubelet[2583]: W0314 00:37:36.155752 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.158784 kubelet[2583]: E0314 00:37:36.155782 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.166090 containerd[1451]: time="2026-03-14T00:37:36.163820603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:36.166090 containerd[1451]: time="2026-03-14T00:37:36.164083190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:36.166090 containerd[1451]: time="2026-03-14T00:37:36.164269212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:36.166090 containerd[1451]: time="2026-03-14T00:37:36.165246482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:36.229823 kubelet[2583]: E0314 00:37:36.228610 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee" Mar 14 00:37:36.258537 systemd[1]: Started cri-containerd-796c36e282176e6e3d506031a5267f0228e98bd725a4bdf1403767c67505ce2c.scope - libcontainer container 796c36e282176e6e3d506031a5267f0228e98bd725a4bdf1403767c67505ce2c. Mar 14 00:37:36.281735 kubelet[2583]: E0314 00:37:36.280465 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.281735 kubelet[2583]: W0314 00:37:36.281473 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.281735 kubelet[2583]: E0314 00:37:36.281514 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.282347 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.289622 kubelet[2583]: W0314 00:37:36.282363 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.282383 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.283358 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.289622 kubelet[2583]: W0314 00:37:36.283371 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.283472 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.284077 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.289622 kubelet[2583]: W0314 00:37:36.284088 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.284102 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.289622 kubelet[2583]: E0314 00:37:36.284543 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.290100 kubelet[2583]: W0314 00:37:36.284555 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.290100 kubelet[2583]: E0314 00:37:36.284627 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.290100 kubelet[2583]: E0314 00:37:36.284940 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.290100 kubelet[2583]: W0314 00:37:36.284950 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.290100 kubelet[2583]: E0314 00:37:36.284963 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.290100 kubelet[2583]: E0314 00:37:36.285332 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.290100 kubelet[2583]: W0314 00:37:36.285344 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.290100 kubelet[2583]: E0314 00:37:36.285359 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.290100 kubelet[2583]: E0314 00:37:36.286488 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.290100 kubelet[2583]: W0314 00:37:36.286497 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.290480 kubelet[2583]: E0314 00:37:36.286507 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:37:36.314009 kubelet[2583]: I0314 00:37:36.313437 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/72792cd6-748a-469c-b9e2-1b61caf289ee-socket-dir\") pod \"csi-node-driver-q92fd\" (UID: \"72792cd6-748a-469c-b9e2-1b61caf289ee\") " pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:37:36.314795 kubelet[2583]: E0314 00:37:36.314091 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:36.314795 kubelet[2583]: W0314 00:37:36.314103 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:36.314795 kubelet[2583]: E0314 00:37:36.314118 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:37:36.318908 kubelet[2583]: I0314 00:37:36.318252 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72792cd6-748a-469c-b9e2-1b61caf289ee-kubelet-dir\") pod \"csi-node-driver-q92fd\" (UID: \"72792cd6-748a-469c-b9e2-1b61caf289ee\") " pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:37:36.319536 kubelet[2583]: E0314 00:37:36.319516 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:36.319793 kubelet[2583]: W0314 00:37:36.319775 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:36.325315 kubelet[2583]: E0314 00:37:36.321683 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:37:36.325315 kubelet[2583]: I0314 00:37:36.321732 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/72792cd6-748a-469c-b9e2-1b61caf289ee-registration-dir\") pod \"csi-node-driver-q92fd\" (UID: \"72792cd6-748a-469c-b9e2-1b61caf289ee\") " pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:37:36.325315 kubelet[2583]: E0314 00:37:36.322140 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:36.325315 kubelet[2583]: W0314 00:37:36.322154 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:36.325315 kubelet[2583]: E0314 00:37:36.322169 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:37:36.325315 kubelet[2583]: I0314 00:37:36.322374 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/72792cd6-748a-469c-b9e2-1b61caf289ee-varrun\") pod \"csi-node-driver-q92fd\" (UID: \"72792cd6-748a-469c-b9e2-1b61caf289ee\") " pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:37:36.325715 kubelet[2583]: E0314 00:37:36.323818 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:36.325715 kubelet[2583]: W0314 00:37:36.323830 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:36.325715 kubelet[2583]: E0314 00:37:36.323847 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:37:36.326472 kubelet[2583]: I0314 00:37:36.326443 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6gq\" (UniqueName: \"kubernetes.io/projected/72792cd6-748a-469c-b9e2-1b61caf289ee-kube-api-access-qw6gq\") pod \"csi-node-driver-q92fd\" (UID: \"72792cd6-748a-469c-b9e2-1b61caf289ee\") " pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:37:36.328263 kubelet[2583]: E0314 00:37:36.328246 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:36.328626 kubelet[2583]: W0314 00:37:36.328452 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:36.328626 kubelet[2583]: E0314 00:37:36.328478 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.487260 kubelet[2583]: E0314 00:37:36.486755 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.487260 kubelet[2583]: W0314 00:37:36.486838 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.487260 kubelet[2583]: E0314 00:37:36.486859 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.489516 containerd[1451]: time="2026-03-14T00:37:36.487836891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gttsx,Uid:2cf7be10-049d-425b-aabf-aebc5ad677b3,Namespace:calico-system,Attempt:0,}" Mar 14 00:37:36.490894 kubelet[2583]: E0314 00:37:36.489802 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.492115 kubelet[2583]: W0314 00:37:36.491024 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.493494 kubelet[2583]: E0314 00:37:36.492341 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.499914 kubelet[2583]: E0314 00:37:36.498801 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.501148 kubelet[2583]: W0314 00:37:36.500079 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.506105 kubelet[2583]: E0314 00:37:36.505465 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.508648 kubelet[2583]: E0314 00:37:36.507531 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.508648 kubelet[2583]: W0314 00:37:36.507623 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.508648 kubelet[2583]: E0314 00:37:36.507655 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:36.532046 containerd[1451]: time="2026-03-14T00:37:36.531995323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b9b979b-54z4p,Uid:dd1db0a4-3ca6-4c13-ab5e-a62c7fc81702,Namespace:calico-system,Attempt:0,} returns sandbox id \"796c36e282176e6e3d506031a5267f0228e98bd725a4bdf1403767c67505ce2c\"" Mar 14 00:37:36.541171 kubelet[2583]: E0314 00:37:36.540898 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:36.548211 containerd[1451]: time="2026-03-14T00:37:36.548169588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:37:36.557807 kubelet[2583]: E0314 00:37:36.557697 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:36.557807 kubelet[2583]: W0314 00:37:36.557747 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:36.557807 kubelet[2583]: E0314 00:37:36.557775 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:36.595734 containerd[1451]: time="2026-03-14T00:37:36.595182305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:36.595734 containerd[1451]: time="2026-03-14T00:37:36.595363206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:36.595734 containerd[1451]: time="2026-03-14T00:37:36.595386361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:36.595734 containerd[1451]: time="2026-03-14T00:37:36.595513300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:36.662069 systemd[1]: Started cri-containerd-6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93.scope - libcontainer container 6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93. Mar 14 00:37:36.747550 containerd[1451]: time="2026-03-14T00:37:36.744371035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gttsx,Uid:2cf7be10-049d-425b-aabf-aebc5ad677b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\"" Mar 14 00:37:37.627752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815819920.mount: Deactivated successfully. 
Mar 14 00:37:37.800838 kubelet[2583]: E0314 00:37:37.799997 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee" Mar 14 00:37:39.798664 kubelet[2583]: E0314 00:37:39.798547 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee" Mar 14 00:37:40.140502 containerd[1451]: time="2026-03-14T00:37:40.139718161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:40.142724 containerd[1451]: time="2026-03-14T00:37:40.142299742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 14 00:37:40.153758 containerd[1451]: time="2026-03-14T00:37:40.153075507Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:40.174439 containerd[1451]: time="2026-03-14T00:37:40.174241167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:40.176186 containerd[1451]: time="2026-03-14T00:37:40.174996673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.623812901s" Mar 14 00:37:40.176612 containerd[1451]: time="2026-03-14T00:37:40.176508402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:37:40.178864 containerd[1451]: time="2026-03-14T00:37:40.178631569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:37:40.218422 containerd[1451]: time="2026-03-14T00:37:40.218337002Z" level=info msg="CreateContainer within sandbox \"796c36e282176e6e3d506031a5267f0228e98bd725a4bdf1403767c67505ce2c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:37:40.286704 containerd[1451]: time="2026-03-14T00:37:40.286510759Z" level=info msg="CreateContainer within sandbox \"796c36e282176e6e3d506031a5267f0228e98bd725a4bdf1403767c67505ce2c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"49e60feb2cdc110b733619cae71c689b6cb665d33e1e6aebf5d0c892a0f4e32f\"" Mar 14 00:37:40.289051 containerd[1451]: time="2026-03-14T00:37:40.288916020Z" level=info msg="StartContainer for \"49e60feb2cdc110b733619cae71c689b6cb665d33e1e6aebf5d0c892a0f4e32f\"" Mar 14 00:37:40.542743 systemd[1]: Started cri-containerd-49e60feb2cdc110b733619cae71c689b6cb665d33e1e6aebf5d0c892a0f4e32f.scope - libcontainer container 49e60feb2cdc110b733619cae71c689b6cb665d33e1e6aebf5d0c892a0f4e32f. 
Mar 14 00:37:40.890888 containerd[1451]: time="2026-03-14T00:37:40.890642002Z" level=info msg="StartContainer for \"49e60feb2cdc110b733619cae71c689b6cb665d33e1e6aebf5d0c892a0f4e32f\" returns successfully" Mar 14 00:37:41.235250 kubelet[2583]: E0314 00:37:41.231685 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:41.285397 kubelet[2583]: E0314 00:37:41.285199 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.285397 kubelet[2583]: W0314 00:37:41.285241 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.285397 kubelet[2583]: E0314 00:37:41.285267 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.286615 kubelet[2583]: E0314 00:37:41.285778 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.286615 kubelet[2583]: W0314 00:37:41.285794 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.286615 kubelet[2583]: E0314 00:37:41.285808 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.286615 kubelet[2583]: E0314 00:37:41.286216 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.286615 kubelet[2583]: W0314 00:37:41.286227 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.286615 kubelet[2583]: E0314 00:37:41.286239 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.287505 kubelet[2583]: E0314 00:37:41.287455 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.287505 kubelet[2583]: W0314 00:37:41.287495 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.287663 kubelet[2583]: E0314 00:37:41.287514 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.291336 kubelet[2583]: E0314 00:37:41.291250 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.291482 kubelet[2583]: W0314 00:37:41.291356 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.291482 kubelet[2583]: E0314 00:37:41.291391 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.292253 kubelet[2583]: E0314 00:37:41.292191 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.292253 kubelet[2583]: W0314 00:37:41.292224 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.292253 kubelet[2583]: E0314 00:37:41.292237 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.292846 kubelet[2583]: E0314 00:37:41.292791 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.292846 kubelet[2583]: W0314 00:37:41.292828 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.292846 kubelet[2583]: E0314 00:37:41.292843 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.293194 kubelet[2583]: E0314 00:37:41.293114 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.293194 kubelet[2583]: W0314 00:37:41.293125 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.293194 kubelet[2583]: E0314 00:37:41.293138 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.297362 kubelet[2583]: E0314 00:37:41.293731 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.297362 kubelet[2583]: W0314 00:37:41.293750 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.297362 kubelet[2583]: E0314 00:37:41.293765 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.297362 kubelet[2583]: E0314 00:37:41.294089 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.297362 kubelet[2583]: W0314 00:37:41.294099 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.297362 kubelet[2583]: E0314 00:37:41.294112 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.298388 kubelet[2583]: E0314 00:37:41.298300 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.298388 kubelet[2583]: W0314 00:37:41.298341 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.298388 kubelet[2583]: E0314 00:37:41.298356 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.300809 kubelet[2583]: E0314 00:37:41.300686 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.300809 kubelet[2583]: W0314 00:37:41.300719 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.300809 kubelet[2583]: E0314 00:37:41.300735 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.301337 kubelet[2583]: E0314 00:37:41.301306 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.301804 kubelet[2583]: W0314 00:37:41.301478 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.301804 kubelet[2583]: E0314 00:37:41.301521 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.302320 kubelet[2583]: E0314 00:37:41.302303 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.302418 kubelet[2583]: W0314 00:37:41.302402 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.302537 kubelet[2583]: E0314 00:37:41.302521 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.304670 kubelet[2583]: E0314 00:37:41.303892 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.304670 kubelet[2583]: W0314 00:37:41.303908 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.304670 kubelet[2583]: E0314 00:37:41.303924 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.306140 kubelet[2583]: E0314 00:37:41.306126 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.306221 kubelet[2583]: W0314 00:37:41.306203 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.306300 kubelet[2583]: E0314 00:37:41.306282 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.308876 kubelet[2583]: E0314 00:37:41.308862 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.309110 kubelet[2583]: W0314 00:37:41.309095 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.309218 kubelet[2583]: E0314 00:37:41.309205 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.312679 kubelet[2583]: E0314 00:37:41.312654 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.313396 kubelet[2583]: W0314 00:37:41.313161 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.313396 kubelet[2583]: E0314 00:37:41.313209 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.314419 kubelet[2583]: E0314 00:37:41.314406 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.314849 kubelet[2583]: W0314 00:37:41.314693 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.314849 kubelet[2583]: E0314 00:37:41.314715 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.316542 kubelet[2583]: E0314 00:37:41.316242 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.316542 kubelet[2583]: W0314 00:37:41.316265 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.316542 kubelet[2583]: E0314 00:37:41.316285 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.317492 kubelet[2583]: E0314 00:37:41.317457 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.317492 kubelet[2583]: W0314 00:37:41.317478 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.317492 kubelet[2583]: E0314 00:37:41.317492 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.318620 kubelet[2583]: E0314 00:37:41.318464 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.318620 kubelet[2583]: W0314 00:37:41.318482 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.318620 kubelet[2583]: E0314 00:37:41.318506 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.325269 kubelet[2583]: E0314 00:37:41.325074 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.325269 kubelet[2583]: W0314 00:37:41.325092 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.325269 kubelet[2583]: E0314 00:37:41.325107 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.327113 kubelet[2583]: E0314 00:37:41.327024 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.327113 kubelet[2583]: W0314 00:37:41.327073 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.327113 kubelet[2583]: E0314 00:37:41.327126 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.329850 kubelet[2583]: E0314 00:37:41.329613 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.329850 kubelet[2583]: W0314 00:37:41.329634 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.329850 kubelet[2583]: E0314 00:37:41.329649 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.331396 kubelet[2583]: E0314 00:37:41.331358 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.331396 kubelet[2583]: W0314 00:37:41.331378 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.331396 kubelet[2583]: E0314 00:37:41.331394 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.333135 kubelet[2583]: E0314 00:37:41.331837 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.333135 kubelet[2583]: W0314 00:37:41.331853 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.333135 kubelet[2583]: E0314 00:37:41.331866 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.333135 kubelet[2583]: E0314 00:37:41.333047 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.333135 kubelet[2583]: W0314 00:37:41.333062 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.333135 kubelet[2583]: E0314 00:37:41.333076 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:37:41.334802 kubelet[2583]: E0314 00:37:41.334740 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.335201 kubelet[2583]: W0314 00:37:41.335103 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.335201 kubelet[2583]: E0314 00:37:41.335147 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:37:41.337312 kubelet[2583]: E0314 00:37:41.337126 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:37:41.337312 kubelet[2583]: W0314 00:37:41.337200 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:37:41.337312 kubelet[2583]: E0314 00:37:41.337219 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:37:41.339401 kubelet[2583]: E0314 00:37:41.339016 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:41.339401 kubelet[2583]: W0314 00:37:41.339033 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:41.339401 kubelet[2583]: E0314 00:37:41.339047 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:37:41.342623 kubelet[2583]: E0314 00:37:41.342555 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:41.342623 kubelet[2583]: W0314 00:37:41.342620 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:41.342754 kubelet[2583]: E0314 00:37:41.342631 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:37:41.343232 kubelet[2583]: E0314 00:37:41.343063 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:37:41.343232 kubelet[2583]: W0314 00:37:41.343094 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:37:41.343232 kubelet[2583]: E0314 00:37:41.343142 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:37:41.711771 containerd[1451]: time="2026-03-14T00:37:41.711085833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:37:41.712878 containerd[1451]: time="2026-03-14T00:37:41.712792359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 14 00:37:41.715687 containerd[1451]: time="2026-03-14T00:37:41.715493983Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:37:41.725611 containerd[1451]: time="2026-03-14T00:37:41.722094649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:37:41.726039 containerd[1451]: time="2026-03-14T00:37:41.725993036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.547272759s"
Mar 14 00:37:41.726131 containerd[1451]: time="2026-03-14T00:37:41.726111831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 14 00:37:41.743831 containerd[1451]: time="2026-03-14T00:37:41.743781262Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 14 00:37:41.797295 containerd[1451]: time="2026-03-14T00:37:41.797238978Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec\""
Mar 14 00:37:41.798288 kubelet[2583]: E0314 00:37:41.798104 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:41.800469 containerd[1451]: time="2026-03-14T00:37:41.799412117Z" level=info msg="StartContainer for \"454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec\""
Mar 14 00:37:41.824158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879760006.mount: Deactivated successfully.
Mar 14 00:37:42.113891 systemd[1]: Started cri-containerd-454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec.scope - libcontainer container 454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec.
Mar 14 00:37:42.171709 containerd[1451]: time="2026-03-14T00:37:42.171619177Z" level=info msg="StartContainer for \"454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec\" returns successfully"
Mar 14 00:37:42.191818 systemd[1]: cri-containerd-454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec.scope: Deactivated successfully.
Mar 14 00:37:42.239979 kubelet[2583]: I0314 00:37:42.239899 2583 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:37:42.241858 kubelet[2583]: E0314 00:37:42.240423 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:42.243358 containerd[1451]: time="2026-03-14T00:37:42.242956088Z" level=info msg="shim disconnected" id=454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec namespace=k8s.io
Mar 14 00:37:42.243358 containerd[1451]: time="2026-03-14T00:37:42.243031491Z" level=warning msg="cleaning up after shim disconnected" id=454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec namespace=k8s.io
Mar 14 00:37:42.243358 containerd[1451]: time="2026-03-14T00:37:42.243047422Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:37:42.281325 kubelet[2583]: I0314 00:37:42.281210 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-76b9b979b-54z4p" podStartSLOduration=3.650429649 podStartE2EDuration="7.28119708s" podCreationTimestamp="2026-03-14 00:37:35 +0000 UTC" firstStartedPulling="2026-03-14 00:37:36.547202888 +0000 UTC m=+22.187289974" lastFinishedPulling="2026-03-14 00:37:40.177970319 +0000 UTC m=+25.818057405" observedRunningTime="2026-03-14 00:37:41.307780635 +0000 UTC m=+26.947867741" watchObservedRunningTime="2026-03-14 00:37:42.28119708 +0000 UTC m=+27.921284165"
Mar 14 00:37:42.794072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-454a5e7d896773e010b639057c1f1793a49b8789d8fd0f22d21b64ff1f3ec1ec-rootfs.mount: Deactivated successfully.
Mar 14 00:37:43.261908 containerd[1451]: time="2026-03-14T00:37:43.261806114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 14 00:37:43.798370 kubelet[2583]: E0314 00:37:43.797855 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:45.800059 kubelet[2583]: E0314 00:37:45.799977 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:48.409115 kubelet[2583]: E0314 00:37:48.405667 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:50.408547 kubelet[2583]: E0314 00:37:50.407775 2583 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.518s"
Mar 14 00:37:51.621671 kubelet[2583]: E0314 00:37:51.620330 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:53.798759 kubelet[2583]: E0314 00:37:53.798673 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:55.803847 kubelet[2583]: E0314 00:37:55.798899 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:57.806196 kubelet[2583]: E0314 00:37:57.805806 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:37:59.799359 kubelet[2583]: E0314 00:37:59.799069 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:01.798693 kubelet[2583]: E0314 00:38:01.798621 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:02.130105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871660630.mount: Deactivated successfully.
Mar 14 00:38:02.287836 containerd[1451]: time="2026-03-14T00:38:02.287210614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 14 00:38:02.316740 containerd[1451]: time="2026-03-14T00:38:02.316193992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 19.054316575s"
Mar 14 00:38:02.316740 containerd[1451]: time="2026-03-14T00:38:02.316282972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 14 00:38:02.336125 containerd[1451]: time="2026-03-14T00:38:02.334746916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:02.336299 containerd[1451]: time="2026-03-14T00:38:02.336264895Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:02.349773 containerd[1451]: time="2026-03-14T00:38:02.345340011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:02.359287 containerd[1451]: time="2026-03-14T00:38:02.359215307Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 14 00:38:02.429234 containerd[1451]: time="2026-03-14T00:38:02.428873168Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e\""
Mar 14 00:38:02.432525 containerd[1451]: time="2026-03-14T00:38:02.432369264Z" level=info msg="StartContainer for \"18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e\""
Mar 14 00:38:02.617390 systemd[1]: Started cri-containerd-18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e.scope - libcontainer container 18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e.
Mar 14 00:38:02.817815 containerd[1451]: time="2026-03-14T00:38:02.817365009Z" level=info msg="StartContainer for \"18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e\" returns successfully"
Mar 14 00:38:02.885110 kubelet[2583]: I0314 00:38:02.885027 2583 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:38:02.885759 kubelet[2583]: E0314 00:38:02.885522 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:02.906335 systemd[1]: cri-containerd-18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e.scope: Deactivated successfully.
Mar 14 00:38:03.129664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e-rootfs.mount: Deactivated successfully.
Mar 14 00:38:03.388338 containerd[1451]: time="2026-03-14T00:38:03.388079704Z" level=info msg="shim disconnected" id=18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e namespace=k8s.io
Mar 14 00:38:03.388338 containerd[1451]: time="2026-03-14T00:38:03.388167843Z" level=warning msg="cleaning up after shim disconnected" id=18624f44c780255973f3093def4a079eae3837d764a4023484f301cef1b42a2e namespace=k8s.io
Mar 14 00:38:03.388338 containerd[1451]: time="2026-03-14T00:38:03.388182008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:38:03.644053 kubelet[2583]: E0314 00:38:03.642228 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:03.661713 containerd[1451]: time="2026-03-14T00:38:03.661404685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 14 00:38:03.797761 kubelet[2583]: E0314 00:38:03.797701 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:05.798994 kubelet[2583]: E0314 00:38:05.798906 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:07.798144 kubelet[2583]: E0314 00:38:07.797978 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:09.798832 kubelet[2583]: E0314 00:38:09.798703 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:10.992332 containerd[1451]: time="2026-03-14T00:38:10.992036102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:10.997739 containerd[1451]: time="2026-03-14T00:38:10.997221531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 14 00:38:11.004539 containerd[1451]: time="2026-03-14T00:38:11.001864778Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:11.017818 containerd[1451]: time="2026-03-14T00:38:11.017171079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:11.025893 containerd[1451]: time="2026-03-14T00:38:11.025375470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.36389057s"
Mar 14 00:38:11.025893 containerd[1451]: time="2026-03-14T00:38:11.025438374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 14 00:38:11.058773 containerd[1451]: time="2026-03-14T00:38:11.058430572Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 14 00:38:11.124364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430157170.mount: Deactivated successfully.
Mar 14 00:38:11.143620 containerd[1451]: time="2026-03-14T00:38:11.141096659Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02\""
Mar 14 00:38:11.150769 containerd[1451]: time="2026-03-14T00:38:11.148416032Z" level=info msg="StartContainer for \"73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02\""
Mar 14 00:38:11.295168 systemd[1]: Started cri-containerd-73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02.scope - libcontainer container 73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02.
Mar 14 00:38:11.514254 containerd[1451]: time="2026-03-14T00:38:11.514200757Z" level=info msg="StartContainer for \"73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02\" returns successfully"
Mar 14 00:38:11.803507 kubelet[2583]: E0314 00:38:11.797917 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:13.192115 systemd[1]: cri-containerd-73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02.scope: Deactivated successfully.
Mar 14 00:38:13.193046 systemd[1]: cri-containerd-73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02.scope: Consumed 1.252s CPU time.
Mar 14 00:38:13.242726 kubelet[2583]: I0314 00:38:13.242665 2583 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 14 00:38:13.272184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02-rootfs.mount: Deactivated successfully.
Mar 14 00:38:13.316887 containerd[1451]: time="2026-03-14T00:38:13.312101996Z" level=info msg="shim disconnected" id=73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02 namespace=k8s.io
Mar 14 00:38:13.316887 containerd[1451]: time="2026-03-14T00:38:13.312176581Z" level=warning msg="cleaning up after shim disconnected" id=73dc8d7f4cdaf975331d24e7190e21155d7a4d63d66ab9d7d01a29d219388f02 namespace=k8s.io
Mar 14 00:38:13.316887 containerd[1451]: time="2026-03-14T00:38:13.312194003Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:38:13.520181 systemd[1]: Created slice kubepods-burstable-pod1528cef9_ef7a_4b03_b27c_111acf337f79.slice - libcontainer container kubepods-burstable-pod1528cef9_ef7a_4b03_b27c_111acf337f79.slice.
Mar 14 00:38:13.571025 systemd[1]: Created slice kubepods-burstable-pod46df48d4_8ce4_4d83_97c1_d2d7b89d6608.slice - libcontainer container kubepods-burstable-pod46df48d4_8ce4_4d83_97c1_d2d7b89d6608.slice.
Mar 14 00:38:13.622233 systemd[1]: Created slice kubepods-besteffort-pod3444a880_39b3_4cba_abbf_267ebeaaa2fc.slice - libcontainer container kubepods-besteffort-pod3444a880_39b3_4cba_abbf_267ebeaaa2fc.slice.
Mar 14 00:38:13.662661 systemd[1]: Created slice kubepods-besteffort-pod8a946f90_08bb_4be5_826c_db54ba31997f.slice - libcontainer container kubepods-besteffort-pod8a946f90_08bb_4be5_826c_db54ba31997f.slice.
Mar 14 00:38:13.681074 kubelet[2583]: I0314 00:38:13.680998 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1528cef9-ef7a-4b03-b27c-111acf337f79-config-volume\") pod \"coredns-7d764666f9-hmzcq\" (UID: \"1528cef9-ef7a-4b03-b27c-111acf337f79\") " pod="kube-system/coredns-7d764666f9-hmzcq"
Mar 14 00:38:13.681295 kubelet[2583]: I0314 00:38:13.681082 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvhfm\" (UniqueName: \"kubernetes.io/projected/e17452f4-a642-40cd-ac57-08b53d428d2c-kube-api-access-zvhfm\") pod \"calico-apiserver-d4cbf978c-xvlsd\" (UID: \"e17452f4-a642-40cd-ac57-08b53d428d2c\") " pod="calico-system/calico-apiserver-d4cbf978c-xvlsd"
Mar 14 00:38:13.681295 kubelet[2583]: I0314 00:38:13.681116 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3444a880-39b3-4cba-abbf-267ebeaaa2fc-goldmane-key-pair\") pod \"goldmane-9f7667bb8-d6kbv\" (UID: \"3444a880-39b3-4cba-abbf-267ebeaaa2fc\") " pod="calico-system/goldmane-9f7667bb8-d6kbv"
Mar 14 00:38:13.681295 kubelet[2583]: I0314 00:38:13.681141 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-nginx-config\") pod \"whisker-5cb66794d-x6prv\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " pod="calico-system/whisker-5cb66794d-x6prv"
Mar 14 00:38:13.681295 kubelet[2583]: I0314 00:38:13.681168 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-ca-bundle\") pod \"whisker-5cb66794d-x6prv\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " pod="calico-system/whisker-5cb66794d-x6prv"
Mar 14 00:38:13.681295 kubelet[2583]: I0314 00:38:13.681193 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e17452f4-a642-40cd-ac57-08b53d428d2c-calico-apiserver-certs\") pod \"calico-apiserver-d4cbf978c-xvlsd\" (UID: \"e17452f4-a642-40cd-ac57-08b53d428d2c\") " pod="calico-system/calico-apiserver-d4cbf978c-xvlsd"
Mar 14 00:38:13.682295 kubelet[2583]: I0314 00:38:13.681216 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444a880-39b3-4cba-abbf-267ebeaaa2fc-config\") pod \"goldmane-9f7667bb8-d6kbv\" (UID: \"3444a880-39b3-4cba-abbf-267ebeaaa2fc\") " pod="calico-system/goldmane-9f7667bb8-d6kbv"
Mar 14 00:38:13.682295 kubelet[2583]: I0314 00:38:13.681241 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46df48d4-8ce4-4d83-97c1-d2d7b89d6608-config-volume\") pod \"coredns-7d764666f9-hsmxv\" (UID: \"46df48d4-8ce4-4d83-97c1-d2d7b89d6608\") " pod="kube-system/coredns-7d764666f9-hsmxv"
Mar 14 00:38:13.682295 kubelet[2583]: I0314 00:38:13.681262 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtdg\" (UniqueName: \"kubernetes.io/projected/46df48d4-8ce4-4d83-97c1-d2d7b89d6608-kube-api-access-qxtdg\") pod \"coredns-7d764666f9-hsmxv\" (UID: \"46df48d4-8ce4-4d83-97c1-d2d7b89d6608\") " pod="kube-system/coredns-7d764666f9-hsmxv"
Mar 14 00:38:13.682295 kubelet[2583]: I0314 00:38:13.681320 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-backend-key-pair\") pod \"whisker-5cb66794d-x6prv\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " pod="calico-system/whisker-5cb66794d-x6prv"
Mar 14 00:38:13.682295 kubelet[2583]: I0314 00:38:13.681378 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvgx\" (UniqueName: \"kubernetes.io/projected/8a946f90-08bb-4be5-826c-db54ba31997f-kube-api-access-gbvgx\") pod \"whisker-5cb66794d-x6prv\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " pod="calico-system/whisker-5cb66794d-x6prv"
Mar 14 00:38:13.682549 kubelet[2583]: I0314 00:38:13.681406 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qvn\" (UniqueName: \"kubernetes.io/projected/1528cef9-ef7a-4b03-b27c-111acf337f79-kube-api-access-w9qvn\") pod \"coredns-7d764666f9-hmzcq\" (UID: \"1528cef9-ef7a-4b03-b27c-111acf337f79\") " pod="kube-system/coredns-7d764666f9-hmzcq"
Mar 14 00:38:13.682549 kubelet[2583]: I0314 00:38:13.681427 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3444a880-39b3-4cba-abbf-267ebeaaa2fc-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-d6kbv\" (UID: \"3444a880-39b3-4cba-abbf-267ebeaaa2fc\") " pod="calico-system/goldmane-9f7667bb8-d6kbv"
Mar 14 00:38:13.682549 kubelet[2583]: I0314 00:38:13.681455 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txxf\" (UniqueName: \"kubernetes.io/projected/3444a880-39b3-4cba-abbf-267ebeaaa2fc-kube-api-access-8txxf\") pod \"goldmane-9f7667bb8-d6kbv\" (UID: \"3444a880-39b3-4cba-abbf-267ebeaaa2fc\") " pod="calico-system/goldmane-9f7667bb8-d6kbv"
Mar 14 00:38:13.713111 systemd[1]: Created slice kubepods-besteffort-pode17452f4_a642_40cd_ac57_08b53d428d2c.slice - libcontainer container kubepods-besteffort-pode17452f4_a642_40cd_ac57_08b53d428d2c.slice.
Mar 14 00:38:13.745964 systemd[1]: Created slice kubepods-besteffort-podf5b883cf_5732_4d59_9bf1_5e7701804c52.slice - libcontainer container kubepods-besteffort-podf5b883cf_5732_4d59_9bf1_5e7701804c52.slice.
Mar 14 00:38:13.767617 systemd[1]: Created slice kubepods-besteffort-pod9e3a983d_b049_4edf_864f_a102bf11f3b8.slice - libcontainer container kubepods-besteffort-pod9e3a983d_b049_4edf_864f_a102bf11f3b8.slice.
Mar 14 00:38:13.795239 kubelet[2583]: I0314 00:38:13.782103 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dsh\" (UniqueName: \"kubernetes.io/projected/9e3a983d-b049-4edf-864f-a102bf11f3b8-kube-api-access-f9dsh\") pod \"calico-kube-controllers-5cc94955b9-l2gbs\" (UID: \"9e3a983d-b049-4edf-864f-a102bf11f3b8\") " pod="calico-system/calico-kube-controllers-5cc94955b9-l2gbs"
Mar 14 00:38:13.795239 kubelet[2583]: I0314 00:38:13.782194 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9k9x\" (UniqueName: \"kubernetes.io/projected/f5b883cf-5732-4d59-9bf1-5e7701804c52-kube-api-access-t9k9x\") pod \"calico-apiserver-d4cbf978c-p9t45\" (UID: \"f5b883cf-5732-4d59-9bf1-5e7701804c52\") " pod="calico-system/calico-apiserver-d4cbf978c-p9t45"
Mar 14 00:38:13.795239 kubelet[2583]: I0314 00:38:13.782249 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e3a983d-b049-4edf-864f-a102bf11f3b8-tigera-ca-bundle\") pod \"calico-kube-controllers-5cc94955b9-l2gbs\" (UID: \"9e3a983d-b049-4edf-864f-a102bf11f3b8\") " pod="calico-system/calico-kube-controllers-5cc94955b9-l2gbs"
Mar 14 00:38:13.795239 kubelet[2583]: I0314 00:38:13.782448 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5b883cf-5732-4d59-9bf1-5e7701804c52-calico-apiserver-certs\") pod \"calico-apiserver-d4cbf978c-p9t45\" (UID: \"f5b883cf-5732-4d59-9bf1-5e7701804c52\") " pod="calico-system/calico-apiserver-d4cbf978c-p9t45"
Mar 14 00:38:13.857081 containerd[1451]: time="2026-03-14T00:38:13.852544046Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 14 00:38:13.881534 systemd[1]: Created slice kubepods-besteffort-pod72792cd6_748a_469c_b9e2_1b61caf289ee.slice - libcontainer container kubepods-besteffort-pod72792cd6_748a_469c_b9e2_1b61caf289ee.slice.
Mar 14 00:38:13.906504 containerd[1451]: time="2026-03-14T00:38:13.905848890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q92fd,Uid:72792cd6-748a-469c-b9e2-1b61caf289ee,Namespace:calico-system,Attempt:0,}"
Mar 14 00:38:13.910261 kubelet[2583]: E0314 00:38:13.909297 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:13.910871 containerd[1451]: time="2026-03-14T00:38:13.910811083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hsmxv,Uid:46df48d4-8ce4-4d83-97c1-d2d7b89d6608,Namespace:kube-system,Attempt:0,}"
Mar 14 00:38:13.960004 containerd[1451]: time="2026-03-14T00:38:13.959946659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d6kbv,Uid:3444a880-39b3-4cba-abbf-267ebeaaa2fc,Namespace:calico-system,Attempt:0,}"
Mar 14 00:38:14.011802 containerd[1451]: time="2026-03-14T00:38:14.011741085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb66794d-x6prv,Uid:8a946f90-08bb-4be5-826c-db54ba31997f,Namespace:calico-system,Attempt:0,}"
Mar 14 00:38:14.041012 containerd[1451]: time="2026-03-14T00:38:14.040964055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-xvlsd,Uid:e17452f4-a642-40cd-ac57-08b53d428d2c,Namespace:calico-system,Attempt:0,}"
Mar 14 00:38:14.089397 containerd[1451]: time="2026-03-14T00:38:14.088861283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-p9t45,Uid:f5b883cf-5732-4d59-9bf1-5e7701804c52,Namespace:calico-system,Attempt:0,}"
Mar 14 00:38:14.093713 containerd[1451]: time="2026-03-14T00:38:14.093291692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cc94955b9-l2gbs,Uid:9e3a983d-b049-4edf-864f-a102bf11f3b8,Namespace:calico-system,Attempt:0,}"
Mar 14 00:38:14.159911 kubelet[2583]: E0314 00:38:14.159014 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:14.160138 containerd[1451]: time="2026-03-14T00:38:14.160047060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmzcq,Uid:1528cef9-ef7a-4b03-b27c-111acf337f79,Namespace:kube-system,Attempt:0,}"
Mar 14 00:38:14.325501 containerd[1451]: time="2026-03-14T00:38:14.322635870Z" level=info msg="CreateContainer within sandbox \"6d957934b5c17d51134d7595fe867332eda6e291654e3a2ffec1bd1d266bef93\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"798b764755de8c481d929fb471f0053e6baefb1ba766f3cab2fd28e3ab410dca\""
Mar 14 00:38:14.327692 containerd[1451]: time="2026-03-14T00:38:14.327126670Z" level=info msg="StartContainer for \"798b764755de8c481d929fb471f0053e6baefb1ba766f3cab2fd28e3ab410dca\""
Mar 14 00:38:14.527440 systemd[1]: Started cri-containerd-798b764755de8c481d929fb471f0053e6baefb1ba766f3cab2fd28e3ab410dca.scope - libcontainer container 798b764755de8c481d929fb471f0053e6baefb1ba766f3cab2fd28e3ab410dca.
Mar 14 00:38:14.685484 containerd[1451]: time="2026-03-14T00:38:14.684724906Z" level=error msg="Failed to destroy network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.687106 containerd[1451]: time="2026-03-14T00:38:14.687019855Z" level=error msg="encountered an error cleaning up failed sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.687215 containerd[1451]: time="2026-03-14T00:38:14.687112775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q92fd,Uid:72792cd6-748a-469c-b9e2-1b61caf289ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.710171 kubelet[2583]: E0314 00:38:14.709519 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.710171 kubelet[2583]: E0314 00:38:14.709638 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:38:14.710171 kubelet[2583]: E0314 00:38:14.709663 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q92fd"
Mar 14 00:38:14.710964 kubelet[2583]: E0314 00:38:14.710914 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q92fd_calico-system(72792cd6-748a-469c-b9e2-1b61caf289ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q92fd_calico-system(72792cd6-748a-469c-b9e2-1b61caf289ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee"
Mar 14 00:38:14.713405 containerd[1451]: time="2026-03-14T00:38:14.713281593Z" level=error msg="Failed to destroy network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.718780 containerd[1451]: time="2026-03-14T00:38:14.718701978Z" level=error msg="encountered an error cleaning up failed sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.718917 containerd[1451]: time="2026-03-14T00:38:14.718887285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb66794d-x6prv,Uid:8a946f90-08bb-4be5-826c-db54ba31997f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.720337 kubelet[2583]: E0314 00:38:14.720142 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:38:14.720337 kubelet[2583]: E0314 00:38:14.720265 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cb66794d-x6prv"
Mar 14 00:38:14.720337 kubelet[2583]: E0314 00:38:14.720296 2583 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cb66794d-x6prv" Mar 14 00:38:14.720511 kubelet[2583]: E0314 00:38:14.720401 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cb66794d-x6prv_calico-system(8a946f90-08bb-4be5-826c-db54ba31997f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cb66794d-x6prv_calico-system(8a946f90-08bb-4be5-826c-db54ba31997f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cb66794d-x6prv" podUID="8a946f90-08bb-4be5-826c-db54ba31997f" Mar 14 00:38:14.727184 containerd[1451]: time="2026-03-14T00:38:14.727153279Z" level=error msg="Failed to destroy network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.728285 containerd[1451]: time="2026-03-14T00:38:14.728204134Z" level=error msg="encountered an error cleaning up failed sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.730902 containerd[1451]: time="2026-03-14T00:38:14.730791336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hsmxv,Uid:46df48d4-8ce4-4d83-97c1-d2d7b89d6608,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.736923 kubelet[2583]: E0314 00:38:14.731152 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.736923 kubelet[2583]: E0314 00:38:14.731214 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hsmxv" Mar 14 00:38:14.736923 kubelet[2583]: E0314 00:38:14.731240 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-hsmxv" Mar 14 00:38:14.737106 kubelet[2583]: E0314 00:38:14.731433 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-hsmxv_kube-system(46df48d4-8ce4-4d83-97c1-d2d7b89d6608)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-hsmxv_kube-system(46df48d4-8ce4-4d83-97c1-d2d7b89d6608)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hsmxv" podUID="46df48d4-8ce4-4d83-97c1-d2d7b89d6608" Mar 14 00:38:14.742663 containerd[1451]: time="2026-03-14T00:38:14.741205268Z" level=error msg="Failed to destroy network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.742663 containerd[1451]: time="2026-03-14T00:38:14.742098164Z" level=error msg="encountered an error cleaning up failed sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.742663 containerd[1451]: time="2026-03-14T00:38:14.742158744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmzcq,Uid:1528cef9-ef7a-4b03-b27c-111acf337f79,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.742837 kubelet[2583]: E0314 00:38:14.742541 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.742837 kubelet[2583]: E0314 00:38:14.742674 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hmzcq" Mar 14 00:38:14.742837 kubelet[2583]: E0314 00:38:14.742702 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hmzcq" Mar 14 00:38:14.742992 kubelet[2583]: E0314 00:38:14.742790 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-hmzcq_kube-system(1528cef9-ef7a-4b03-b27c-111acf337f79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7d764666f9-hmzcq_kube-system(1528cef9-ef7a-4b03-b27c-111acf337f79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hmzcq" podUID="1528cef9-ef7a-4b03-b27c-111acf337f79" Mar 14 00:38:14.756667 containerd[1451]: time="2026-03-14T00:38:14.756624422Z" level=info msg="StartContainer for \"798b764755de8c481d929fb471f0053e6baefb1ba766f3cab2fd28e3ab410dca\" returns successfully" Mar 14 00:38:14.764617 containerd[1451]: time="2026-03-14T00:38:14.764399563Z" level=error msg="Failed to destroy network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.765497 containerd[1451]: time="2026-03-14T00:38:14.765411877Z" level=error msg="encountered an error cleaning up failed sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.765651 containerd[1451]: time="2026-03-14T00:38:14.765515225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-p9t45,Uid:f5b883cf-5732-4d59-9bf1-5e7701804c52,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.765800 kubelet[2583]: E0314 00:38:14.765760 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.765860 kubelet[2583]: E0314 00:38:14.765813 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d4cbf978c-p9t45" Mar 14 00:38:14.765860 kubelet[2583]: E0314 00:38:14.765837 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d4cbf978c-p9t45" Mar 14 00:38:14.765988 kubelet[2583]: E0314 00:38:14.765890 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d4cbf978c-p9t45_calico-system(f5b883cf-5732-4d59-9bf1-5e7701804c52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d4cbf978c-p9t45_calico-system(f5b883cf-5732-4d59-9bf1-5e7701804c52)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d4cbf978c-p9t45" podUID="f5b883cf-5732-4d59-9bf1-5e7701804c52" Mar 14 00:38:14.774092 containerd[1451]: time="2026-03-14T00:38:14.772675366Z" level=error msg="Failed to destroy network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.774687 containerd[1451]: time="2026-03-14T00:38:14.774506511Z" level=error msg="encountered an error cleaning up failed sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.774791 containerd[1451]: time="2026-03-14T00:38:14.774679497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d6kbv,Uid:3444a880-39b3-4cba-abbf-267ebeaaa2fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.775721 kubelet[2583]: E0314 00:38:14.775376 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.775721 kubelet[2583]: E0314 00:38:14.775462 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-d6kbv" Mar 14 00:38:14.775721 kubelet[2583]: E0314 00:38:14.775489 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-d6kbv" Mar 14 00:38:14.775888 kubelet[2583]: E0314 00:38:14.775551 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-d6kbv_calico-system(3444a880-39b3-4cba-abbf-267ebeaaa2fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-d6kbv_calico-system(3444a880-39b3-4cba-abbf-267ebeaaa2fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-d6kbv" 
podUID="3444a880-39b3-4cba-abbf-267ebeaaa2fc" Mar 14 00:38:14.782255 kubelet[2583]: I0314 00:38:14.781802 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:38:14.787900 containerd[1451]: time="2026-03-14T00:38:14.787762334Z" level=error msg="Failed to destroy network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.792045 kubelet[2583]: I0314 00:38:14.791644 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:38:14.796199 kubelet[2583]: I0314 00:38:14.795982 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:38:14.805539 containerd[1451]: time="2026-03-14T00:38:14.804479899Z" level=info msg="StopPodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\"" Mar 14 00:38:14.805539 containerd[1451]: time="2026-03-14T00:38:14.805533978Z" level=info msg="StopPodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\"" Mar 14 00:38:14.806273 containerd[1451]: time="2026-03-14T00:38:14.805942884Z" level=info msg="StopPodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\"" Mar 14 00:38:14.808851 containerd[1451]: time="2026-03-14T00:38:14.808670531Z" level=error msg="Failed to destroy network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 14 00:38:14.808998 containerd[1451]: time="2026-03-14T00:38:14.808955691Z" level=info msg="Ensure that sandbox 7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979 in task-service has been cleanup successfully" Mar 14 00:38:14.809376 containerd[1451]: time="2026-03-14T00:38:14.809344419Z" level=info msg="Ensure that sandbox 6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792 in task-service has been cleanup successfully" Mar 14 00:38:14.810157 containerd[1451]: time="2026-03-14T00:38:14.809506286Z" level=info msg="Ensure that sandbox bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a in task-service has been cleanup successfully" Mar 14 00:38:14.810157 containerd[1451]: time="2026-03-14T00:38:14.810086811Z" level=error msg="encountered an error cleaning up failed sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.810157 containerd[1451]: time="2026-03-14T00:38:14.810136722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cc94955b9-l2gbs,Uid:9e3a983d-b049-4edf-864f-a102bf11f3b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.811339 containerd[1451]: time="2026-03-14T00:38:14.810442018Z" level=error msg="encountered an error cleaning up failed sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.811339 containerd[1451]: time="2026-03-14T00:38:14.810497439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-xvlsd,Uid:e17452f4-a642-40cd-ac57-08b53d428d2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.816229 kubelet[2583]: E0314 00:38:14.816147 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.816229 kubelet[2583]: E0314 00:38:14.816222 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cc94955b9-l2gbs" Mar 14 00:38:14.816478 kubelet[2583]: E0314 00:38:14.816245 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cc94955b9-l2gbs" Mar 14 00:38:14.816478 kubelet[2583]: E0314 00:38:14.816334 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cc94955b9-l2gbs_calico-system(9e3a983d-b049-4edf-864f-a102bf11f3b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cc94955b9-l2gbs_calico-system(9e3a983d-b049-4edf-864f-a102bf11f3b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cc94955b9-l2gbs" podUID="9e3a983d-b049-4edf-864f-a102bf11f3b8" Mar 14 00:38:14.816478 kubelet[2583]: E0314 00:38:14.816406 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.816716 kubelet[2583]: E0314 00:38:14.816429 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d4cbf978c-xvlsd" Mar 14 
00:38:14.816716 kubelet[2583]: E0314 00:38:14.816446 2583 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d4cbf978c-xvlsd" Mar 14 00:38:14.816716 kubelet[2583]: E0314 00:38:14.816480 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d4cbf978c-xvlsd_calico-system(e17452f4-a642-40cd-ac57-08b53d428d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d4cbf978c-xvlsd_calico-system(e17452f4-a642-40cd-ac57-08b53d428d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d4cbf978c-xvlsd" podUID="e17452f4-a642-40cd-ac57-08b53d428d2c" Mar 14 00:38:14.819271 kubelet[2583]: I0314 00:38:14.819166 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:38:14.826053 containerd[1451]: time="2026-03-14T00:38:14.824516563Z" level=info msg="StopPodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\"" Mar 14 00:38:14.826053 containerd[1451]: time="2026-03-14T00:38:14.825145178Z" level=info msg="Ensure that sandbox 0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b in task-service has been cleanup successfully" Mar 14 00:38:14.829110 kubelet[2583]: I0314 
00:38:14.828980 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:38:14.833382 containerd[1451]: time="2026-03-14T00:38:14.833108161Z" level=info msg="StopPodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\"" Mar 14 00:38:14.833476 containerd[1451]: time="2026-03-14T00:38:14.833383393Z" level=info msg="Ensure that sandbox 0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78 in task-service has been cleanup successfully" Mar 14 00:38:14.912185 kubelet[2583]: I0314 00:38:14.912093 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:38:14.917248 containerd[1451]: time="2026-03-14T00:38:14.916937061Z" level=info msg="StopPodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\"" Mar 14 00:38:14.917803 containerd[1451]: time="2026-03-14T00:38:14.917750613Z" level=info msg="Ensure that sandbox 38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878 in task-service has been cleanup successfully" Mar 14 00:38:14.961767 containerd[1451]: time="2026-03-14T00:38:14.961165072Z" level=error msg="StopPodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" failed" error="failed to destroy network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.963003 kubelet[2583]: E0314 00:38:14.962257 2583 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:38:14.963003 kubelet[2583]: E0314 00:38:14.962366 2583 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979"} Mar 14 00:38:14.963003 kubelet[2583]: E0314 00:38:14.962440 2583 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5b883cf-5732-4d59-9bf1-5e7701804c52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:38:14.963003 kubelet[2583]: E0314 00:38:14.962477 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5b883cf-5732-4d59-9bf1-5e7701804c52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d4cbf978c-p9t45" podUID="f5b883cf-5732-4d59-9bf1-5e7701804c52" Mar 14 00:38:14.969619 kubelet[2583]: I0314 00:38:14.969156 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-gttsx" podStartSLOduration=2.945390437 podStartE2EDuration="39.969142268s" podCreationTimestamp="2026-03-14 00:37:35 +0000 UTC" firstStartedPulling="2026-03-14 
00:37:36.755399594 +0000 UTC m=+22.395486679" lastFinishedPulling="2026-03-14 00:38:13.779151414 +0000 UTC m=+59.419238510" observedRunningTime="2026-03-14 00:38:14.93884424 +0000 UTC m=+60.578931326" watchObservedRunningTime="2026-03-14 00:38:14.969142268 +0000 UTC m=+60.609229354" Mar 14 00:38:14.969796 containerd[1451]: time="2026-03-14T00:38:14.969401533Z" level=error msg="StopPodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" failed" error="failed to destroy network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:14.970735 kubelet[2583]: E0314 00:38:14.970505 2583 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:38:14.970735 kubelet[2583]: E0314 00:38:14.970648 2583 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a"} Mar 14 00:38:14.971287 kubelet[2583]: E0314 00:38:14.970894 2583 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72792cd6-748a-469c-b9e2-1b61caf289ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:38:14.972524 kubelet[2583]: E0314 00:38:14.972373 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72792cd6-748a-469c-b9e2-1b61caf289ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q92fd" podUID="72792cd6-748a-469c-b9e2-1b61caf289ee" Mar 14 00:38:15.067139 containerd[1451]: time="2026-03-14T00:38:15.064893099Z" level=error msg="StopPodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" failed" error="failed to destroy network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:15.067398 kubelet[2583]: E0314 00:38:15.066651 2583 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:38:15.067398 kubelet[2583]: E0314 00:38:15.066712 2583 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792"} Mar 14 00:38:15.067398 
kubelet[2583]: E0314 00:38:15.066757 2583 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1528cef9-ef7a-4b03-b27c-111acf337f79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:38:15.067398 kubelet[2583]: E0314 00:38:15.066799 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1528cef9-ef7a-4b03-b27c-111acf337f79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hmzcq" podUID="1528cef9-ef7a-4b03-b27c-111acf337f79" Mar 14 00:38:15.084074 containerd[1451]: time="2026-03-14T00:38:15.083906463Z" level=error msg="StopPodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" failed" error="failed to destroy network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:15.084438 kubelet[2583]: E0314 00:38:15.084212 2583 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:38:15.084438 kubelet[2583]: E0314 00:38:15.084334 2583 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78"} Mar 14 00:38:15.084438 kubelet[2583]: E0314 00:38:15.084384 2583 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3444a880-39b3-4cba-abbf-267ebeaaa2fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:38:15.084438 kubelet[2583]: E0314 00:38:15.084427 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3444a880-39b3-4cba-abbf-267ebeaaa2fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-d6kbv" podUID="3444a880-39b3-4cba-abbf-267ebeaaa2fc" Mar 14 00:38:15.091440 containerd[1451]: time="2026-03-14T00:38:15.091108103Z" level=error msg="StopPodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" failed" error="failed to destroy network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:15.092525 kubelet[2583]: E0314 00:38:15.091967 2583 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:38:15.092525 kubelet[2583]: E0314 00:38:15.092058 2583 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b"} Mar 14 00:38:15.092525 kubelet[2583]: E0314 00:38:15.092103 2583 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a946f90-08bb-4be5-826c-db54ba31997f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:38:15.092525 kubelet[2583]: E0314 00:38:15.092141 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a946f90-08bb-4be5-826c-db54ba31997f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-5cb66794d-x6prv" podUID="8a946f90-08bb-4be5-826c-db54ba31997f" Mar 14 00:38:15.136396 containerd[1451]: time="2026-03-14T00:38:15.136280995Z" level=error msg="StopPodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" failed" error="failed to destroy network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:38:15.136808 kubelet[2583]: E0314 00:38:15.136762 2583 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:38:15.136970 kubelet[2583]: E0314 00:38:15.136820 2583 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878"} Mar 14 00:38:15.136970 kubelet[2583]: E0314 00:38:15.136853 2583 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46df48d4-8ce4-4d83-97c1-d2d7b89d6608\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:38:15.136970 kubelet[2583]: E0314 00:38:15.136880 2583 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46df48d4-8ce4-4d83-97c1-d2d7b89d6608\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hsmxv" podUID="46df48d4-8ce4-4d83-97c1-d2d7b89d6608" Mar 14 00:38:15.275767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979-shm.mount: Deactivated successfully. Mar 14 00:38:15.276122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153-shm.mount: Deactivated successfully. Mar 14 00:38:15.276399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b-shm.mount: Deactivated successfully. Mar 14 00:38:15.276969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78-shm.mount: Deactivated successfully. Mar 14 00:38:15.277185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a-shm.mount: Deactivated successfully. Mar 14 00:38:15.277412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878-shm.mount: Deactivated successfully. 
Mar 14 00:38:15.926644 kubelet[2583]: I0314 00:38:15.922777 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:38:15.927305 containerd[1451]: time="2026-03-14T00:38:15.924950966Z" level=info msg="StopPodSandbox for \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\"" Mar 14 00:38:15.927305 containerd[1451]: time="2026-03-14T00:38:15.925232799Z" level=info msg="Ensure that sandbox 6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec in task-service has been cleanup successfully" Mar 14 00:38:15.935835 kubelet[2583]: I0314 00:38:15.935779 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:38:15.943382 containerd[1451]: time="2026-03-14T00:38:15.942221571Z" level=info msg="StopPodSandbox for \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\"" Mar 14 00:38:15.953785 containerd[1451]: time="2026-03-14T00:38:15.953147563Z" level=info msg="Ensure that sandbox 798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153 in task-service has been cleanup successfully" Mar 14 00:38:15.955386 containerd[1451]: time="2026-03-14T00:38:15.955353250Z" level=info msg="StopPodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\"" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.306 [INFO][3964] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.306 [INFO][3964] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" iface="eth0" netns="/var/run/netns/cni-55b56542-c026-ba44-9855-67c76634855e" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.306 [INFO][3964] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" iface="eth0" netns="/var/run/netns/cni-55b56542-c026-ba44-9855-67c76634855e" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.309 [INFO][3964] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" iface="eth0" netns="/var/run/netns/cni-55b56542-c026-ba44-9855-67c76634855e" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.309 [INFO][3964] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.309 [INFO][3964] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.423 [INFO][4005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.424 [INFO][4005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.424 [INFO][4005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.443 [WARNING][4005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.443 [INFO][4005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.593 [INFO][4005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:16.621613 containerd[1451]: 2026-03-14 00:38:16.606 [INFO][3964] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:38:16.633720 containerd[1451]: time="2026-03-14T00:38:16.627738389Z" level=info msg="TearDown network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" successfully" Mar 14 00:38:16.633720 containerd[1451]: time="2026-03-14T00:38:16.627790233Z" level=info msg="StopPodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" returns successfully" Mar 14 00:38:16.637389 systemd[1]: run-netns-cni\x2d55b56542\x2dc026\x2dba44\x2d9855\x2d67c76634855e.mount: Deactivated successfully. Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.360 [INFO][3939] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.360 [INFO][3939] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" iface="eth0" netns="/var/run/netns/cni-06387528-2d6f-ffb5-f457-3fe951b14621" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.361 [INFO][3939] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" iface="eth0" netns="/var/run/netns/cni-06387528-2d6f-ffb5-f457-3fe951b14621" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.361 [INFO][3939] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" iface="eth0" netns="/var/run/netns/cni-06387528-2d6f-ffb5-f457-3fe951b14621" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.361 [INFO][3939] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.361 [INFO][3939] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.473 [INFO][4011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.475 [INFO][4011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.595 [INFO][4011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.655 [WARNING][4011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.656 [INFO][4011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.683 [INFO][4011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:16.706067 containerd[1451]: 2026-03-14 00:38:16.693 [INFO][3939] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:38:16.714828 systemd[1]: run-netns-cni\x2d06387528\x2d2d6f\x2dffb5\x2df457\x2d3fe951b14621.mount: Deactivated successfully. 
Mar 14 00:38:16.724626 containerd[1451]: time="2026-03-14T00:38:16.722478369Z" level=info msg="TearDown network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\" successfully" Mar 14 00:38:16.724626 containerd[1451]: time="2026-03-14T00:38:16.722522078Z" level=info msg="StopPodSandbox for \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\" returns successfully" Mar 14 00:38:16.733963 containerd[1451]: time="2026-03-14T00:38:16.732776321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cc94955b9-l2gbs,Uid:9e3a983d-b049-4edf-864f-a102bf11f3b8,Namespace:calico-system,Attempt:1,}" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.339 [INFO][3976] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.340 [INFO][3976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" iface="eth0" netns="/var/run/netns/cni-50d52f6c-4909-92a6-9bc9-d2ddc40eeef2" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.341 [INFO][3976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" iface="eth0" netns="/var/run/netns/cni-50d52f6c-4909-92a6-9bc9-d2ddc40eeef2" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.366 [INFO][3976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" iface="eth0" netns="/var/run/netns/cni-50d52f6c-4909-92a6-9bc9-d2ddc40eeef2" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.367 [INFO][3976] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.367 [INFO][3976] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.476 [INFO][4016] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.476 [INFO][4016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.688 [INFO][4016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.726 [WARNING][4016] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.726 [INFO][4016] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.733 [INFO][4016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:16.758631 containerd[1451]: 2026-03-14 00:38:16.752 [INFO][3976] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:38:16.760007 containerd[1451]: time="2026-03-14T00:38:16.759421509Z" level=info msg="TearDown network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\" successfully" Mar 14 00:38:16.760007 containerd[1451]: time="2026-03-14T00:38:16.759455512Z" level=info msg="StopPodSandbox for \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\" returns successfully" Mar 14 00:38:16.768316 systemd[1]: run-netns-cni\x2d50d52f6c\x2d4909\x2d92a6\x2d9bc9\x2dd2ddc40eeef2.mount: Deactivated successfully. 
Mar 14 00:38:16.773649 containerd[1451]: time="2026-03-14T00:38:16.773527244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-xvlsd,Uid:e17452f4-a642-40cd-ac57-08b53d428d2c,Namespace:calico-system,Attempt:1,}" Mar 14 00:38:16.830046 kubelet[2583]: I0314 00:38:16.829182 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-nginx-config\" (UniqueName: \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-nginx-config\") pod \"8a946f90-08bb-4be5-826c-db54ba31997f\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " Mar 14 00:38:16.830046 kubelet[2583]: I0314 00:38:16.829282 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/8a946f90-08bb-4be5-826c-db54ba31997f-kube-api-access-gbvgx\" (UniqueName: \"kubernetes.io/projected/8a946f90-08bb-4be5-826c-db54ba31997f-kube-api-access-gbvgx\") pod \"8a946f90-08bb-4be5-826c-db54ba31997f\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " Mar 14 00:38:16.830046 kubelet[2583]: I0314 00:38:16.829316 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-ca-bundle\") pod \"8a946f90-08bb-4be5-826c-db54ba31997f\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " Mar 14 00:38:16.830046 kubelet[2583]: I0314 00:38:16.829344 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-backend-key-pair\") pod \"8a946f90-08bb-4be5-826c-db54ba31997f\" (UID: \"8a946f90-08bb-4be5-826c-db54ba31997f\") " Mar 14 00:38:16.832143 kubelet[2583]: I0314 00:38:16.831335 2583 
operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-ca-bundle" pod "8a946f90-08bb-4be5-826c-db54ba31997f" (UID: "8a946f90-08bb-4be5-826c-db54ba31997f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:38:16.836848 kubelet[2583]: I0314 00:38:16.833145 2583 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-nginx-config" pod "8a946f90-08bb-4be5-826c-db54ba31997f" (UID: "8a946f90-08bb-4be5-826c-db54ba31997f"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:38:16.836848 kubelet[2583]: I0314 00:38:16.836716 2583 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-backend-key-pair" pod "8a946f90-08bb-4be5-826c-db54ba31997f" (UID: "8a946f90-08bb-4be5-826c-db54ba31997f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:38:16.838502 kubelet[2583]: I0314 00:38:16.838476 2583 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a946f90-08bb-4be5-826c-db54ba31997f-kube-api-access-gbvgx" pod "8a946f90-08bb-4be5-826c-db54ba31997f" (UID: "8a946f90-08bb-4be5-826c-db54ba31997f"). InnerVolumeSpecName "kube-api-access-gbvgx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:38:16.931449 kubelet[2583]: I0314 00:38:16.930412 2583 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 14 00:38:16.931449 kubelet[2583]: I0314 00:38:16.930450 2583 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a946f90-08bb-4be5-826c-db54ba31997f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 14 00:38:16.931449 kubelet[2583]: I0314 00:38:16.930466 2583 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8a946f90-08bb-4be5-826c-db54ba31997f-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 14 00:38:16.931449 kubelet[2583]: I0314 00:38:16.930479 2583 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gbvgx\" (UniqueName: \"kubernetes.io/projected/8a946f90-08bb-4be5-826c-db54ba31997f-kube-api-access-gbvgx\") on node \"localhost\" DevicePath \"\"" Mar 14 00:38:16.953687 systemd[1]: Removed slice kubepods-besteffort-pod8a946f90_08bb_4be5_826c_db54ba31997f.slice - libcontainer container kubepods-besteffort-pod8a946f90_08bb_4be5_826c_db54ba31997f.slice. Mar 14 00:38:17.221908 systemd[1]: Created slice kubepods-besteffort-pod65c00d65_fd87_4429_a403_6d5e42bbf0b6.slice - libcontainer container kubepods-besteffort-pod65c00d65_fd87_4429_a403_6d5e42bbf0b6.slice. 
Mar 14 00:38:17.336333 kubelet[2583]: I0314 00:38:17.334704 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65c00d65-fd87-4429-a403-6d5e42bbf0b6-whisker-ca-bundle\") pod \"whisker-76b548fd49-vhrh8\" (UID: \"65c00d65-fd87-4429-a403-6d5e42bbf0b6\") " pod="calico-system/whisker-76b548fd49-vhrh8" Mar 14 00:38:17.336333 kubelet[2583]: I0314 00:38:17.334942 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65c00d65-fd87-4429-a403-6d5e42bbf0b6-whisker-backend-key-pair\") pod \"whisker-76b548fd49-vhrh8\" (UID: \"65c00d65-fd87-4429-a403-6d5e42bbf0b6\") " pod="calico-system/whisker-76b548fd49-vhrh8" Mar 14 00:38:17.336333 kubelet[2583]: I0314 00:38:17.334978 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8hc\" (UniqueName: \"kubernetes.io/projected/65c00d65-fd87-4429-a403-6d5e42bbf0b6-kube-api-access-tv8hc\") pod \"whisker-76b548fd49-vhrh8\" (UID: \"65c00d65-fd87-4429-a403-6d5e42bbf0b6\") " pod="calico-system/whisker-76b548fd49-vhrh8" Mar 14 00:38:17.336333 kubelet[2583]: I0314 00:38:17.335007 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/65c00d65-fd87-4429-a403-6d5e42bbf0b6-nginx-config\") pod \"whisker-76b548fd49-vhrh8\" (UID: \"65c00d65-fd87-4429-a403-6d5e42bbf0b6\") " pod="calico-system/whisker-76b548fd49-vhrh8" Mar 14 00:38:17.355907 systemd-networkd[1380]: cali48eba595ef2: Link UP Mar 14 00:38:17.357935 systemd-networkd[1380]: cali48eba595ef2: Gained carrier Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:16.864 [ERROR][4043] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no 
such file or directory filename="/var/lib/calico/mtu" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:16.925 [INFO][4043] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0 calico-kube-controllers-5cc94955b9- calico-system 9e3a983d-b049-4edf-864f-a102bf11f3b8 1022 0 2026-03-14 00:37:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cc94955b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5cc94955b9-l2gbs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali48eba595ef2 [] [] }} ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:16.925 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.007 [INFO][4072] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" HandleID="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.019 [INFO][4072] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" HandleID="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048d120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5cc94955b9-l2gbs", "timestamp":"2026-03-14 00:38:17.007132994 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000206dc0)} Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.021 [INFO][4072] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.021 [INFO][4072] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.022 [INFO][4072] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.033 [INFO][4072] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.100 [INFO][4072] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.130 [INFO][4072] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.139 [INFO][4072] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.173 [INFO][4072] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.173 [INFO][4072] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.192 [INFO][4072] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.225 [INFO][4072] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.288 [INFO][4072] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.288 [INFO][4072] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" host="localhost" Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.288 [INFO][4072] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:17.448023 containerd[1451]: 2026-03-14 00:38:17.288 [INFO][4072] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" HandleID="k8s-pod-network.c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.456055 containerd[1451]: 2026-03-14 00:38:17.295 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0", GenerateName:"calico-kube-controllers-5cc94955b9-", Namespace:"calico-system", SelfLink:"", UID:"9e3a983d-b049-4edf-864f-a102bf11f3b8", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cc94955b9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5cc94955b9-l2gbs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48eba595ef2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:17.456055 containerd[1451]: 2026-03-14 00:38:17.296 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.456055 containerd[1451]: 2026-03-14 00:38:17.296 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48eba595ef2 ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.456055 containerd[1451]: 2026-03-14 00:38:17.360 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.456055 containerd[1451]: 
2026-03-14 00:38:17.365 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0", GenerateName:"calico-kube-controllers-5cc94955b9-", Namespace:"calico-system", SelfLink:"", UID:"9e3a983d-b049-4edf-864f-a102bf11f3b8", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cc94955b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d", Pod:"calico-kube-controllers-5cc94955b9-l2gbs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48eba595ef2", MAC:"4e:1c:91:38:5e:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:17.456055 containerd[1451]: 
2026-03-14 00:38:17.438 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d" Namespace="calico-system" Pod="calico-kube-controllers-5cc94955b9-l2gbs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:38:17.514100 containerd[1451]: time="2026-03-14T00:38:17.507635466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:17.514100 containerd[1451]: time="2026-03-14T00:38:17.507811307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:17.514100 containerd[1451]: time="2026-03-14T00:38:17.507856790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:17.514100 containerd[1451]: time="2026-03-14T00:38:17.508024737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:17.565666 containerd[1451]: time="2026-03-14T00:38:17.563964736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b548fd49-vhrh8,Uid:65c00d65-fd87-4429-a403-6d5e42bbf0b6,Namespace:calico-system,Attempt:0,}" Mar 14 00:38:17.598417 systemd[1]: Started cri-containerd-c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d.scope - libcontainer container c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d. Mar 14 00:38:17.651498 systemd[1]: var-lib-kubelet-pods-8a946f90\x2d08bb\x2d4be5\x2d826c\x2ddb54ba31997f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbvgx.mount: Deactivated successfully. 
Mar 14 00:38:17.651877 systemd[1]: var-lib-kubelet-pods-8a946f90\x2d08bb\x2d4be5\x2d826c\x2ddb54ba31997f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:38:17.655472 systemd-networkd[1380]: cali756615e1354: Link UP Mar 14 00:38:17.657539 systemd-networkd[1380]: cali756615e1354: Gained carrier Mar 14 00:38:17.692147 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:16.890 [ERROR][4056] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:16.925 [INFO][4056] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0 calico-apiserver-d4cbf978c- calico-system e17452f4-a642-40cd-ac57-08b53d428d2c 1023 0 2026-03-14 00:37:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d4cbf978c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d4cbf978c-xvlsd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali756615e1354 [] [] }} ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:16.925 [INFO][4056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" 
Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.014 [INFO][4074] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" HandleID="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.070 [INFO][4074] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" HandleID="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-d4cbf978c-xvlsd", "timestamp":"2026-03-14 00:38:17.014369129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f8f20)} Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.070 [INFO][4074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.290 [INFO][4074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.295 [INFO][4074] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.348 [INFO][4074] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.403 [INFO][4074] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.470 [INFO][4074] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.480 [INFO][4074] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.489 [INFO][4074] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.489 [INFO][4074] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.501 [INFO][4074] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.538 [INFO][4074] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.623 [INFO][4074] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.623 [INFO][4074] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" host="localhost" Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.625 [INFO][4074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:17.717974 containerd[1451]: 2026-03-14 00:38:17.625 [INFO][4074] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" HandleID="k8s-pod-network.cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.720780 containerd[1451]: 2026-03-14 00:38:17.641 [INFO][4056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"e17452f4-a642-40cd-ac57-08b53d428d2c", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d4cbf978c-xvlsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali756615e1354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:17.720780 containerd[1451]: 2026-03-14 00:38:17.641 [INFO][4056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.720780 containerd[1451]: 2026-03-14 00:38:17.641 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali756615e1354 ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.720780 containerd[1451]: 2026-03-14 00:38:17.662 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.720780 containerd[1451]: 2026-03-14 00:38:17.663 [INFO][4056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"e17452f4-a642-40cd-ac57-08b53d428d2c", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e", Pod:"calico-apiserver-d4cbf978c-xvlsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali756615e1354", MAC:"aa:c7:8d:b2:ba:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:17.720780 containerd[1451]: 2026-03-14 00:38:17.708 [INFO][4056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e" 
Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-xvlsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:38:17.968107 containerd[1451]: time="2026-03-14T00:38:17.967818953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cc94955b9-l2gbs,Uid:9e3a983d-b049-4edf-864f-a102bf11f3b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d\"" Mar 14 00:38:17.973325 containerd[1451]: time="2026-03-14T00:38:17.972363207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:38:18.011744 containerd[1451]: time="2026-03-14T00:38:18.002947294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:18.011744 containerd[1451]: time="2026-03-14T00:38:18.003031347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:18.011744 containerd[1451]: time="2026-03-14T00:38:18.003048258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:18.011744 containerd[1451]: time="2026-03-14T00:38:18.003216265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:18.125848 systemd[1]: Started cri-containerd-cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e.scope - libcontainer container cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e. 
Mar 14 00:38:18.262913 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:18.418117 containerd[1451]: time="2026-03-14T00:38:18.418018249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-xvlsd,Uid:e17452f4-a642-40cd-ac57-08b53d428d2c,Namespace:calico-system,Attempt:1,} returns sandbox id \"cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e\"" Mar 14 00:38:18.589324 systemd-networkd[1380]: cali915fb996982: Link UP Mar 14 00:38:18.594866 systemd-networkd[1380]: cali48eba595ef2: Gained IPv6LL Mar 14 00:38:18.605352 systemd-networkd[1380]: cali915fb996982: Gained carrier Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:17.908 [ERROR][4133] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:17.990 [INFO][4133] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76b548fd49--vhrh8-eth0 whisker-76b548fd49- calico-system 65c00d65-fd87-4429-a403-6d5e42bbf0b6 1041 0 2026-03-14 00:38:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76b548fd49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76b548fd49-vhrh8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali915fb996982 [] [] }} ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:17.991 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.271 [INFO][4236] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" HandleID="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Workload="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.299 [INFO][4236] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" HandleID="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Workload="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000116310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76b548fd49-vhrh8", "timestamp":"2026-03-14 00:38:18.271257853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192000)} Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.299 [INFO][4236] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.299 [INFO][4236] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.299 [INFO][4236] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.319 [INFO][4236] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.396 [INFO][4236] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.483 [INFO][4236] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.499 [INFO][4236] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.511 [INFO][4236] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.511 [INFO][4236] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.516 [INFO][4236] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.532 [INFO][4236] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.565 [INFO][4236] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.566 [INFO][4236] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" host="localhost" Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.566 [INFO][4236] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:18.695938 containerd[1451]: 2026-03-14 00:38:18.566 [INFO][4236] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" HandleID="k8s-pod-network.93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Workload="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.707370 containerd[1451]: 2026-03-14 00:38:18.576 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76b548fd49--vhrh8-eth0", GenerateName:"whisker-76b548fd49-", Namespace:"calico-system", SelfLink:"", UID:"65c00d65-fd87-4429-a403-6d5e42bbf0b6", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76b548fd49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76b548fd49-vhrh8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali915fb996982", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:18.707370 containerd[1451]: 2026-03-14 00:38:18.576 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.707370 containerd[1451]: 2026-03-14 00:38:18.576 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali915fb996982 ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.707370 containerd[1451]: 2026-03-14 00:38:18.613 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.707370 containerd[1451]: 2026-03-14 00:38:18.614 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" 
WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76b548fd49--vhrh8-eth0", GenerateName:"whisker-76b548fd49-", Namespace:"calico-system", SelfLink:"", UID:"65c00d65-fd87-4429-a403-6d5e42bbf0b6", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76b548fd49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c", Pod:"whisker-76b548fd49-vhrh8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali915fb996982", MAC:"3a:62:4c:86:c6:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:18.707370 containerd[1451]: 2026-03-14 00:38:18.686 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c" Namespace="calico-system" Pod="whisker-76b548fd49-vhrh8" WorkloadEndpoint="localhost-k8s-whisker--76b548fd49--vhrh8-eth0" Mar 14 00:38:18.864410 systemd-networkd[1380]: cali756615e1354: Gained IPv6LL Mar 14 00:38:18.874806 kubelet[2583]: I0314 
00:38:18.874737 2583 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8a946f90-08bb-4be5-826c-db54ba31997f" path="/var/lib/kubelet/pods/8a946f90-08bb-4be5-826c-db54ba31997f/volumes" Mar 14 00:38:18.919862 containerd[1451]: time="2026-03-14T00:38:18.906470444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:18.919862 containerd[1451]: time="2026-03-14T00:38:18.906754904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:18.919862 containerd[1451]: time="2026-03-14T00:38:18.906771955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:18.919862 containerd[1451]: time="2026-03-14T00:38:18.906981709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:19.027913 systemd[1]: Started cri-containerd-93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c.scope - libcontainer container 93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c. 
Mar 14 00:38:19.104396 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:19.300916 containerd[1451]: time="2026-03-14T00:38:19.300799452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b548fd49-vhrh8,Uid:65c00d65-fd87-4429-a403-6d5e42bbf0b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c\"" Mar 14 00:38:19.427265 kernel: calico-node[4293]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:38:19.877743 systemd-networkd[1380]: cali915fb996982: Gained IPv6LL Mar 14 00:38:20.896229 systemd-networkd[1380]: vxlan.calico: Link UP Mar 14 00:38:20.896242 systemd-networkd[1380]: vxlan.calico: Gained carrier Mar 14 00:38:22.564155 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Mar 14 00:38:22.758195 containerd[1451]: time="2026-03-14T00:38:22.757632081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:22.768474 containerd[1451]: time="2026-03-14T00:38:22.767174732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:38:22.775420 containerd[1451]: time="2026-03-14T00:38:22.775332919Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:22.778962 containerd[1451]: time="2026-03-14T00:38:22.778904705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:22.780744 containerd[1451]: time="2026-03-14T00:38:22.780492244Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.808078635s" Mar 14 00:38:22.780744 containerd[1451]: time="2026-03-14T00:38:22.780622333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:38:22.786448 containerd[1451]: time="2026-03-14T00:38:22.785968697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:38:22.815501 containerd[1451]: time="2026-03-14T00:38:22.815006728Z" level=info msg="CreateContainer within sandbox \"c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:38:22.865355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406129786.mount: Deactivated successfully. Mar 14 00:38:22.877272 containerd[1451]: time="2026-03-14T00:38:22.876545129Z" level=info msg="CreateContainer within sandbox \"c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9947fedb70bcf73b29f1f49d5b5ae0e942c8f3730592906674cba4f09f577aeb\"" Mar 14 00:38:22.878392 containerd[1451]: time="2026-03-14T00:38:22.878319882Z" level=info msg="StartContainer for \"9947fedb70bcf73b29f1f49d5b5ae0e942c8f3730592906674cba4f09f577aeb\"" Mar 14 00:38:22.970178 systemd[1]: Started cri-containerd-9947fedb70bcf73b29f1f49d5b5ae0e942c8f3730592906674cba4f09f577aeb.scope - libcontainer container 9947fedb70bcf73b29f1f49d5b5ae0e942c8f3730592906674cba4f09f577aeb. 
Mar 14 00:38:23.150967 containerd[1451]: time="2026-03-14T00:38:23.150785687Z" level=info msg="StartContainer for \"9947fedb70bcf73b29f1f49d5b5ae0e942c8f3730592906674cba4f09f577aeb\" returns successfully" Mar 14 00:38:24.229097 kubelet[2583]: I0314 00:38:24.228319 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cc94955b9-l2gbs" podStartSLOduration=43.41794197 podStartE2EDuration="48.228298765s" podCreationTimestamp="2026-03-14 00:37:36 +0000 UTC" firstStartedPulling="2026-03-14 00:38:17.971955863 +0000 UTC m=+63.612042959" lastFinishedPulling="2026-03-14 00:38:22.782312668 +0000 UTC m=+68.422399754" observedRunningTime="2026-03-14 00:38:24.227649018 +0000 UTC m=+69.867736124" watchObservedRunningTime="2026-03-14 00:38:24.228298765 +0000 UTC m=+69.868385881" Mar 14 00:38:25.803458 containerd[1451]: time="2026-03-14T00:38:25.801530002Z" level=info msg="StopPodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\"" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:25.931 [INFO][4585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:25.931 [INFO][4585] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" iface="eth0" netns="/var/run/netns/cni-a59834a7-4ef4-99ba-f670-74904bfd6adf" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:25.932 [INFO][4585] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" iface="eth0" netns="/var/run/netns/cni-a59834a7-4ef4-99ba-f670-74904bfd6adf" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:25.933 [INFO][4585] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" iface="eth0" netns="/var/run/netns/cni-a59834a7-4ef4-99ba-f670-74904bfd6adf" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:25.933 [INFO][4585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:25.933 [INFO][4585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.002 [INFO][4594] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.002 [INFO][4594] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.002 [INFO][4594] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.020 [WARNING][4594] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.021 [INFO][4594] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.036 [INFO][4594] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:26.050425 containerd[1451]: 2026-03-14 00:38:26.041 [INFO][4585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:38:26.051314 containerd[1451]: time="2026-03-14T00:38:26.050692729Z" level=info msg="TearDown network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" successfully" Mar 14 00:38:26.051314 containerd[1451]: time="2026-03-14T00:38:26.050805195Z" level=info msg="StopPodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" returns successfully" Mar 14 00:38:26.055406 systemd[1]: run-netns-cni\x2da59834a7\x2d4ef4\x2d99ba\x2df670\x2d74904bfd6adf.mount: Deactivated successfully. 
Mar 14 00:38:26.135359 containerd[1451]: time="2026-03-14T00:38:26.135312513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q92fd,Uid:72792cd6-748a-469c-b9e2-1b61caf289ee,Namespace:calico-system,Attempt:1,}" Mar 14 00:38:26.510848 containerd[1451]: time="2026-03-14T00:38:26.510800082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:26.518996 containerd[1451]: time="2026-03-14T00:38:26.518536853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:38:26.521305 containerd[1451]: time="2026-03-14T00:38:26.521114962Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:26.533500 containerd[1451]: time="2026-03-14T00:38:26.533211444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:26.535288 containerd[1451]: time="2026-03-14T00:38:26.535068679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.74891297s" Mar 14 00:38:26.535288 containerd[1451]: time="2026-03-14T00:38:26.535114874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:38:26.539108 containerd[1451]: time="2026-03-14T00:38:26.539043452Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:38:26.546111 systemd-networkd[1380]: cali05a5dead07a: Link UP Mar 14 00:38:26.554862 systemd-networkd[1380]: cali05a5dead07a: Gained carrier Mar 14 00:38:26.567620 containerd[1451]: time="2026-03-14T00:38:26.567527589Z" level=info msg="CreateContainer within sandbox \"cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:38:26.623618 containerd[1451]: time="2026-03-14T00:38:26.623359788Z" level=info msg="CreateContainer within sandbox \"cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c8692cdb5a15c8f60f913029912e50b2c43f91a3ba55614a167bcb6abb55e40\"" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.282 [INFO][4602] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q92fd-eth0 csi-node-driver- calico-system 72792cd6-748a-469c-b9e2-1b61caf289ee 1081 0 2026-03-14 00:37:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q92fd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali05a5dead07a [] [] }} ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.283 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" 
Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.334 [INFO][4617] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" HandleID="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.375 [INFO][4617] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" HandleID="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q92fd", "timestamp":"2026-03-14 00:38:26.3341775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005071e0)} Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.375 [INFO][4617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.375 [INFO][4617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.375 [INFO][4617] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.389 [INFO][4617] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.411 [INFO][4617] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.430 [INFO][4617] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.442 [INFO][4617] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.466 [INFO][4617] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.466 [INFO][4617] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.472 [INFO][4617] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1 Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.504 [INFO][4617] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.529 [INFO][4617] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.530 [INFO][4617] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" host="localhost" Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.530 [INFO][4617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:26.624817 containerd[1451]: 2026-03-14 00:38:26.530 [INFO][4617] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" HandleID="k8s-pod-network.c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.625701 containerd[1451]: 2026-03-14 00:38:26.534 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q92fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72792cd6-748a-469c-b9e2-1b61caf289ee", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q92fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05a5dead07a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:26.625701 containerd[1451]: 2026-03-14 00:38:26.537 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.625701 containerd[1451]: 2026-03-14 00:38:26.537 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05a5dead07a ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.625701 containerd[1451]: 2026-03-14 00:38:26.552 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.625701 containerd[1451]: 2026-03-14 00:38:26.566 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" 
Namespace="calico-system" Pod="csi-node-driver-q92fd" WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q92fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72792cd6-748a-469c-b9e2-1b61caf289ee", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1", Pod:"csi-node-driver-q92fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05a5dead07a", MAC:"5a:f3:53:a5:de:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:26.625701 containerd[1451]: 2026-03-14 00:38:26.600 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1" Namespace="calico-system" Pod="csi-node-driver-q92fd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:38:26.632219 containerd[1451]: time="2026-03-14T00:38:26.632073957Z" level=info msg="StartContainer for \"6c8692cdb5a15c8f60f913029912e50b2c43f91a3ba55614a167bcb6abb55e40\"" Mar 14 00:38:26.715469 containerd[1451]: time="2026-03-14T00:38:26.706506247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:26.715469 containerd[1451]: time="2026-03-14T00:38:26.706629553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:26.715469 containerd[1451]: time="2026-03-14T00:38:26.706666421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:26.715469 containerd[1451]: time="2026-03-14T00:38:26.707081725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:26.741795 systemd[1]: Started cri-containerd-6c8692cdb5a15c8f60f913029912e50b2c43f91a3ba55614a167bcb6abb55e40.scope - libcontainer container 6c8692cdb5a15c8f60f913029912e50b2c43f91a3ba55614a167bcb6abb55e40. Mar 14 00:38:26.780955 systemd[1]: Started cri-containerd-c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1.scope - libcontainer container c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1. 
Mar 14 00:38:26.827408 containerd[1451]: time="2026-03-14T00:38:26.819897564Z" level=info msg="StopPodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\"" Mar 14 00:38:26.912344 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:26.963293 containerd[1451]: time="2026-03-14T00:38:26.963244770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q92fd,Uid:72792cd6-748a-469c-b9e2-1b61caf289ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1\"" Mar 14 00:38:27.004038 containerd[1451]: time="2026-03-14T00:38:27.003865593Z" level=info msg="StartContainer for \"6c8692cdb5a15c8f60f913029912e50b2c43f91a3ba55614a167bcb6abb55e40\" returns successfully" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.109 [INFO][4724] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.110 [INFO][4724] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" iface="eth0" netns="/var/run/netns/cni-84cd9466-3721-2687-1b26-db6f314eecbd" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.110 [INFO][4724] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" iface="eth0" netns="/var/run/netns/cni-84cd9466-3721-2687-1b26-db6f314eecbd" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.111 [INFO][4724] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" iface="eth0" netns="/var/run/netns/cni-84cd9466-3721-2687-1b26-db6f314eecbd" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.111 [INFO][4724] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.111 [INFO][4724] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.170 [INFO][4758] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.180 [INFO][4758] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.180 [INFO][4758] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.196 [WARNING][4758] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.196 [INFO][4758] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.203 [INFO][4758] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:27.233867 containerd[1451]: 2026-03-14 00:38:27.220 [INFO][4724] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:38:27.238116 containerd[1451]: time="2026-03-14T00:38:27.236060294Z" level=info msg="TearDown network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" successfully" Mar 14 00:38:27.238116 containerd[1451]: time="2026-03-14T00:38:27.236102010Z" level=info msg="StopPodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" returns successfully" Mar 14 00:38:27.242138 systemd[1]: run-netns-cni\x2d84cd9466\x2d3721\x2d2687\x2d1b26\x2ddb6f314eecbd.mount: Deactivated successfully. 
Mar 14 00:38:27.278201 kubelet[2583]: E0314 00:38:27.277228 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:27.280737 containerd[1451]: time="2026-03-14T00:38:27.280238955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmzcq,Uid:1528cef9-ef7a-4b03-b27c-111acf337f79,Namespace:kube-system,Attempt:1,}" Mar 14 00:38:27.585490 systemd-networkd[1380]: calib0598fe0896: Link UP Mar 14 00:38:27.585953 systemd-networkd[1380]: calib0598fe0896: Gained carrier Mar 14 00:38:27.608439 kubelet[2583]: I0314 00:38:27.608347 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-d4cbf978c-xvlsd" podStartSLOduration=45.499138869 podStartE2EDuration="53.608330111s" podCreationTimestamp="2026-03-14 00:37:34 +0000 UTC" firstStartedPulling="2026-03-14 00:38:18.428542771 +0000 UTC m=+64.068629857" lastFinishedPulling="2026-03-14 00:38:26.537734014 +0000 UTC m=+72.177821099" observedRunningTime="2026-03-14 00:38:27.253877755 +0000 UTC m=+72.893964861" watchObservedRunningTime="2026-03-14 00:38:27.608330111 +0000 UTC m=+73.248417197" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.405 [INFO][4774] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--hmzcq-eth0 coredns-7d764666f9- kube-system 1528cef9-ef7a-4b03-b27c-111acf337f79 1094 0 2026-03-14 00:37:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-hmzcq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib0598fe0896 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 
8181 0 }] [] }} ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.405 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.462 [INFO][4788] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" HandleID="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.480 [INFO][4788] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" HandleID="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-hmzcq", "timestamp":"2026-03-14 00:38:27.462765567 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a5080)} Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.480 [INFO][4788] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.480 [INFO][4788] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.480 [INFO][4788] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.492 [INFO][4788] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.505 [INFO][4788] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.519 [INFO][4788] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.527 [INFO][4788] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.534 [INFO][4788] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.534 [INFO][4788] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.541 [INFO][4788] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.558 [INFO][4788] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.575 [INFO][4788] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.575 [INFO][4788] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" host="localhost" Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.575 [INFO][4788] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:27.611437 containerd[1451]: 2026-03-14 00:38:27.575 [INFO][4788] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" HandleID="k8s-pod-network.4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.612149 containerd[1451]: 2026-03-14 00:38:27.579 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hmzcq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1528cef9-ef7a-4b03-b27c-111acf337f79", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-hmzcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0598fe0896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:27.612149 containerd[1451]: 2026-03-14 00:38:27.580 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.612149 containerd[1451]: 2026-03-14 00:38:27.580 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0598fe0896 ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" 
Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.612149 containerd[1451]: 2026-03-14 00:38:27.587 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.612149 containerd[1451]: 2026-03-14 00:38:27.588 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hmzcq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1528cef9-ef7a-4b03-b27c-111acf337f79", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb", Pod:"coredns-7d764666f9-hmzcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0598fe0896", MAC:"5a:5c:c1:58:c7:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:27.612149 containerd[1451]: 2026-03-14 00:38:27.606 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb" Namespace="kube-system" Pod="coredns-7d764666f9-hmzcq" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:38:27.616518 containerd[1451]: time="2026-03-14T00:38:27.616303666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:27.617989 containerd[1451]: time="2026-03-14T00:38:27.617933374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:38:27.635161 containerd[1451]: time="2026-03-14T00:38:27.634987984Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 
00:38:27.646619 containerd[1451]: time="2026-03-14T00:38:27.644985453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:27.646619 containerd[1451]: time="2026-03-14T00:38:27.646065799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.106978217s" Mar 14 00:38:27.646619 containerd[1451]: time="2026-03-14T00:38:27.646104230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:38:27.649635 containerd[1451]: time="2026-03-14T00:38:27.649538833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:38:27.657046 containerd[1451]: time="2026-03-14T00:38:27.657006682Z" level=info msg="CreateContainer within sandbox \"93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:38:27.665764 containerd[1451]: time="2026-03-14T00:38:27.665487633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:27.665764 containerd[1451]: time="2026-03-14T00:38:27.665537654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:27.665764 containerd[1451]: time="2026-03-14T00:38:27.665641826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:27.667028 containerd[1451]: time="2026-03-14T00:38:27.666631175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:27.687057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189282512.mount: Deactivated successfully. Mar 14 00:38:27.707903 containerd[1451]: time="2026-03-14T00:38:27.707810016Z" level=info msg="CreateContainer within sandbox \"93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"403279cfb5a7540d1ab22f03c13131a6e3329618598dfe0ac74e1fd5da71d3f4\"" Mar 14 00:38:27.710683 containerd[1451]: time="2026-03-14T00:38:27.710648795Z" level=info msg="StartContainer for \"403279cfb5a7540d1ab22f03c13131a6e3329618598dfe0ac74e1fd5da71d3f4\"" Mar 14 00:38:27.711866 systemd[1]: Started cri-containerd-4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb.scope - libcontainer container 4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb. Mar 14 00:38:27.742711 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:27.809023 systemd[1]: Started cri-containerd-403279cfb5a7540d1ab22f03c13131a6e3329618598dfe0ac74e1fd5da71d3f4.scope - libcontainer container 403279cfb5a7540d1ab22f03c13131a6e3329618598dfe0ac74e1fd5da71d3f4. 
Mar 14 00:38:27.812628 containerd[1451]: time="2026-03-14T00:38:27.812453425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmzcq,Uid:1528cef9-ef7a-4b03-b27c-111acf337f79,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb\"" Mar 14 00:38:27.815814 kubelet[2583]: E0314 00:38:27.815753 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:27.840728 containerd[1451]: time="2026-03-14T00:38:27.839116689Z" level=info msg="CreateContainer within sandbox \"4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:38:27.920231 containerd[1451]: time="2026-03-14T00:38:27.919810964Z" level=info msg="StartContainer for \"403279cfb5a7540d1ab22f03c13131a6e3329618598dfe0ac74e1fd5da71d3f4\" returns successfully" Mar 14 00:38:27.953195 containerd[1451]: time="2026-03-14T00:38:27.952494489Z" level=info msg="CreateContainer within sandbox \"4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c312c45f97661e1a0586e309d4a2a2a4314cf6045cea556081fac4c375e5a02\"" Mar 14 00:38:27.953947 containerd[1451]: time="2026-03-14T00:38:27.953864836Z" level=info msg="StartContainer for \"7c312c45f97661e1a0586e309d4a2a2a4314cf6045cea556081fac4c375e5a02\"" Mar 14 00:38:28.019711 systemd[1]: Started cri-containerd-7c312c45f97661e1a0586e309d4a2a2a4314cf6045cea556081fac4c375e5a02.scope - libcontainer container 7c312c45f97661e1a0586e309d4a2a2a4314cf6045cea556081fac4c375e5a02. 
Mar 14 00:38:28.139998 containerd[1451]: time="2026-03-14T00:38:28.139691895Z" level=info msg="StartContainer for \"7c312c45f97661e1a0586e309d4a2a2a4314cf6045cea556081fac4c375e5a02\" returns successfully" Mar 14 00:38:28.210206 kubelet[2583]: I0314 00:38:28.210129 2583 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:38:28.210683 kubelet[2583]: E0314 00:38:28.210613 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:28.240905 kubelet[2583]: I0314 00:38:28.240730 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hmzcq" podStartSLOduration=69.240711073 podStartE2EDuration="1m9.240711073s" podCreationTimestamp="2026-03-14 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:38:28.238853922 +0000 UTC m=+73.878941007" watchObservedRunningTime="2026-03-14 00:38:28.240711073 +0000 UTC m=+73.880798179" Mar 14 00:38:28.552937 containerd[1451]: time="2026-03-14T00:38:28.552837350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:28.558669 containerd[1451]: time="2026-03-14T00:38:28.558513832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:38:28.563768 containerd[1451]: time="2026-03-14T00:38:28.562433706Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:28.571801 containerd[1451]: time="2026-03-14T00:38:28.571303556Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:28.573039 containerd[1451]: time="2026-03-14T00:38:28.572817891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 923.132248ms" Mar 14 00:38:28.573039 containerd[1451]: time="2026-03-14T00:38:28.572905613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:38:28.577349 containerd[1451]: time="2026-03-14T00:38:28.576687574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:38:28.578853 systemd-networkd[1380]: cali05a5dead07a: Gained IPv6LL Mar 14 00:38:28.591370 containerd[1451]: time="2026-03-14T00:38:28.590815187Z" level=info msg="CreateContainer within sandbox \"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:38:28.646321 containerd[1451]: time="2026-03-14T00:38:28.646235283Z" level=info msg="CreateContainer within sandbox \"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"034b64a12b59f86b7e8132d96f5a925dacbf9e05558d82dc83103b2f87e34e01\"" Mar 14 00:38:28.651525 containerd[1451]: time="2026-03-14T00:38:28.651311229Z" level=info msg="StartContainer for \"034b64a12b59f86b7e8132d96f5a925dacbf9e05558d82dc83103b2f87e34e01\"" Mar 14 00:38:28.726974 systemd[1]: Started cri-containerd-034b64a12b59f86b7e8132d96f5a925dacbf9e05558d82dc83103b2f87e34e01.scope - 
libcontainer container 034b64a12b59f86b7e8132d96f5a925dacbf9e05558d82dc83103b2f87e34e01. Mar 14 00:38:28.804784 containerd[1451]: time="2026-03-14T00:38:28.804505137Z" level=info msg="StopPodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\"" Mar 14 00:38:28.816046 containerd[1451]: time="2026-03-14T00:38:28.815930872Z" level=info msg="StopPodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\"" Mar 14 00:38:28.909445 containerd[1451]: time="2026-03-14T00:38:28.908285101Z" level=info msg="StartContainer for \"034b64a12b59f86b7e8132d96f5a925dacbf9e05558d82dc83103b2f87e34e01\" returns successfully" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.000 [INFO][5006] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.000 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" iface="eth0" netns="/var/run/netns/cni-4f452b0f-d6af-7a19-250a-ff77e2b21fc4" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.001 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" iface="eth0" netns="/var/run/netns/cni-4f452b0f-d6af-7a19-250a-ff77e2b21fc4" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.001 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" iface="eth0" netns="/var/run/netns/cni-4f452b0f-d6af-7a19-250a-ff77e2b21fc4" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.002 [INFO][5006] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.002 [INFO][5006] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.067 [INFO][5022] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.068 [INFO][5022] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.068 [INFO][5022] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.081 [WARNING][5022] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.081 [INFO][5022] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.103 [INFO][5022] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:29.120712 containerd[1451]: 2026-03-14 00:38:29.110 [INFO][5006] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:38:29.121728 containerd[1451]: time="2026-03-14T00:38:29.121498949Z" level=info msg="TearDown network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" successfully" Mar 14 00:38:29.121908 containerd[1451]: time="2026-03-14T00:38:29.121763857Z" level=info msg="StopPodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" returns successfully" Mar 14 00:38:29.128046 systemd[1]: run-netns-cni\x2d4f452b0f\x2dd6af\x2d7a19\x2d250a\x2dff77e2b21fc4.mount: Deactivated successfully. 
Mar 14 00:38:29.131086 kubelet[2583]: E0314 00:38:29.130123 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:29.132034 containerd[1451]: time="2026-03-14T00:38:29.131625707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hsmxv,Uid:46df48d4-8ce4-4d83-97c1-d2d7b89d6608,Namespace:kube-system,Attempt:1,}" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.012 [INFO][5005] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.012 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" iface="eth0" netns="/var/run/netns/cni-8366bcfd-d571-0efb-f2e0-4df90a315207" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.017 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" iface="eth0" netns="/var/run/netns/cni-8366bcfd-d571-0efb-f2e0-4df90a315207" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.018 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" iface="eth0" netns="/var/run/netns/cni-8366bcfd-d571-0efb-f2e0-4df90a315207" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.019 [INFO][5005] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.019 [INFO][5005] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.095 [INFO][5028] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.095 [INFO][5028] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.103 [INFO][5028] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.116 [WARNING][5028] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.116 [INFO][5028] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.123 [INFO][5028] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:29.138892 containerd[1451]: 2026-03-14 00:38:29.132 [INFO][5005] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:38:29.140222 containerd[1451]: time="2026-03-14T00:38:29.139937572Z" level=info msg="TearDown network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" successfully" Mar 14 00:38:29.140222 containerd[1451]: time="2026-03-14T00:38:29.139978827Z" level=info msg="StopPodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" returns successfully" Mar 14 00:38:29.151210 containerd[1451]: time="2026-03-14T00:38:29.151164844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d6kbv,Uid:3444a880-39b3-4cba-abbf-267ebeaaa2fc,Namespace:calico-system,Attempt:1,}" Mar 14 00:38:29.155025 systemd-networkd[1380]: calib0598fe0896: Gained IPv6LL Mar 14 00:38:29.157100 systemd[1]: run-netns-cni\x2d8366bcfd\x2dd571\x2d0efb\x2df2e0\x2d4df90a315207.mount: Deactivated successfully. 
Mar 14 00:38:29.224203 kubelet[2583]: E0314 00:38:29.224133 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:29.769226 systemd-networkd[1380]: calia37fee2ae57: Link UP Mar 14 00:38:29.770163 systemd-networkd[1380]: calia37fee2ae57: Gained carrier Mar 14 00:38:29.813863 containerd[1451]: time="2026-03-14T00:38:29.812115362Z" level=info msg="StopPodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\"" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.365 [INFO][5042] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--hsmxv-eth0 coredns-7d764666f9- kube-system 46df48d4-8ce4-4d83-97c1-d2d7b89d6608 1124 0 2026-03-14 00:37:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-hsmxv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia37fee2ae57 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.366 [INFO][5042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.510 [INFO][5071] ipam/ipam_plugin.go 235: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" HandleID="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.534 [INFO][5071] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" HandleID="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-hsmxv", "timestamp":"2026-03-14 00:38:29.510423267 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005c2c60)} Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.534 [INFO][5071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.535 [INFO][5071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.535 [INFO][5071] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.542 [INFO][5071] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.560 [INFO][5071] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.584 [INFO][5071] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.590 [INFO][5071] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.599 [INFO][5071] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.600 [INFO][5071] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.607 [INFO][5071] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559 Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.667 [INFO][5071] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.731 [INFO][5071] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.731 [INFO][5071] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" host="localhost" Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.731 [INFO][5071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:29.862396 containerd[1451]: 2026-03-14 00:38:29.731 [INFO][5071] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" HandleID="k8s-pod-network.265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.864744 containerd[1451]: 2026-03-14 00:38:29.753 [INFO][5042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hsmxv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"46df48d4-8ce4-4d83-97c1-d2d7b89d6608", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-hsmxv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia37fee2ae57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:29.864744 containerd[1451]: 2026-03-14 00:38:29.753 [INFO][5042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.864744 containerd[1451]: 2026-03-14 00:38:29.753 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia37fee2ae57 ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 
00:38:29.864744 containerd[1451]: 2026-03-14 00:38:29.759 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.864744 containerd[1451]: 2026-03-14 00:38:29.760 [INFO][5042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hsmxv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"46df48d4-8ce4-4d83-97c1-d2d7b89d6608", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559", Pod:"coredns-7d764666f9-hsmxv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia37fee2ae57", 
MAC:"3e:3a:aa:7e:08:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:29.864744 containerd[1451]: 2026-03-14 00:38:29.849 [INFO][5042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559" Namespace="kube-system" Pod="coredns-7d764666f9-hsmxv" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:38:29.920111 systemd-networkd[1380]: cali673bd7f25cd: Link UP Mar 14 00:38:29.922732 systemd-networkd[1380]: cali673bd7f25cd: Gained carrier Mar 14 00:38:29.942117 containerd[1451]: time="2026-03-14T00:38:29.941874978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:29.943944 containerd[1451]: time="2026-03-14T00:38:29.943466366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:29.943944 containerd[1451]: time="2026-03-14T00:38:29.943497324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:29.943944 containerd[1451]: time="2026-03-14T00:38:29.943883174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.439 [INFO][5055] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0 goldmane-9f7667bb8- calico-system 3444a880-39b3-4cba-abbf-267ebeaaa2fc 1125 0 2026-03-14 00:37:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-d6kbv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali673bd7f25cd [] [] }} ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.440 [INFO][5055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.598 [INFO][5078] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" HandleID="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.634 [INFO][5078] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" HandleID="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-d6kbv", "timestamp":"2026-03-14 00:38:29.598989668 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a69a0)} Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.635 [INFO][5078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.731 [INFO][5078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.731 [INFO][5078] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.794 [INFO][5078] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.825 [INFO][5078] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.841 [INFO][5078] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.847 [INFO][5078] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.852 [INFO][5078] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.852 [INFO][5078] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.858 [INFO][5078] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6 Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.875 [INFO][5078] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.900 [INFO][5078] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.900 [INFO][5078] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" host="localhost" Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.900 [INFO][5078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:29.974722 containerd[1451]: 2026-03-14 00:38:29.900 [INFO][5078] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" HandleID="k8s-pod-network.a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.976108 containerd[1451]: 2026-03-14 00:38:29.917 [INFO][5055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"3444a880-39b3-4cba-abbf-267ebeaaa2fc", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-d6kbv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673bd7f25cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:29.976108 containerd[1451]: 2026-03-14 00:38:29.917 [INFO][5055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.976108 containerd[1451]: 2026-03-14 00:38:29.917 [INFO][5055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali673bd7f25cd ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.976108 containerd[1451]: 2026-03-14 00:38:29.921 [INFO][5055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:29.976108 containerd[1451]: 2026-03-14 00:38:29.922 [INFO][5055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"3444a880-39b3-4cba-abbf-267ebeaaa2fc", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6", Pod:"goldmane-9f7667bb8-d6kbv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673bd7f25cd", MAC:"1a:42:54:c4:ff:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:29.976108 containerd[1451]: 2026-03-14 00:38:29.963 [INFO][5055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6" Namespace="calico-system" Pod="goldmane-9f7667bb8-d6kbv" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:38:30.002816 systemd[1]: Started 
cri-containerd-265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559.scope - libcontainer container 265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559. Mar 14 00:38:30.039406 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:30.082001 containerd[1451]: time="2026-03-14T00:38:30.079491949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:30.082001 containerd[1451]: time="2026-03-14T00:38:30.079675397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:30.082001 containerd[1451]: time="2026-03-14T00:38:30.079698399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:30.082001 containerd[1451]: time="2026-03-14T00:38:30.079909859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.010 [INFO][5110] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.010 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" iface="eth0" netns="/var/run/netns/cni-2ef5ffc5-0fb1-07ef-14c3-a27ab12ca4a2" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.011 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" iface="eth0" netns="/var/run/netns/cni-2ef5ffc5-0fb1-07ef-14c3-a27ab12ca4a2" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.011 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" iface="eth0" netns="/var/run/netns/cni-2ef5ffc5-0fb1-07ef-14c3-a27ab12ca4a2" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.011 [INFO][5110] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.011 [INFO][5110] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.090 [INFO][5172] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.090 [INFO][5172] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.090 [INFO][5172] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.106 [WARNING][5172] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.109 [INFO][5172] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.115 [INFO][5172] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:30.134649 containerd[1451]: 2026-03-14 00:38:30.119 [INFO][5110] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:38:30.135720 containerd[1451]: time="2026-03-14T00:38:30.135322498Z" level=info msg="TearDown network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" successfully" Mar 14 00:38:30.135720 containerd[1451]: time="2026-03-14T00:38:30.135366839Z" level=info msg="StopPodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" returns successfully" Mar 14 00:38:30.141265 containerd[1451]: time="2026-03-14T00:38:30.136732243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hsmxv,Uid:46df48d4-8ce4-4d83-97c1-d2d7b89d6608,Namespace:kube-system,Attempt:1,} returns sandbox id \"265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559\"" Mar 14 00:38:30.140014 systemd[1]: Started cri-containerd-a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6.scope - libcontainer container a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6. 
Mar 14 00:38:30.142654 kubelet[2583]: E0314 00:38:30.141817 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:30.148279 containerd[1451]: time="2026-03-14T00:38:30.147175939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-p9t45,Uid:f5b883cf-5732-4d59-9bf1-5e7701804c52,Namespace:calico-system,Attempt:1,}" Mar 14 00:38:30.152674 systemd[1]: run-netns-cni\x2d2ef5ffc5\x2d0fb1\x2d07ef\x2d14c3\x2da27ab12ca4a2.mount: Deactivated successfully. Mar 14 00:38:30.155326 containerd[1451]: time="2026-03-14T00:38:30.155251750Z" level=info msg="CreateContainer within sandbox \"265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:38:30.223915 containerd[1451]: time="2026-03-14T00:38:30.223793277Z" level=info msg="CreateContainer within sandbox \"265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfca2f5012296e29e685ca4ff6adc2b70650d20cf365a38b3d8a5df591b44006\"" Mar 14 00:38:30.227196 containerd[1451]: time="2026-03-14T00:38:30.225643404Z" level=info msg="StartContainer for \"cfca2f5012296e29e685ca4ff6adc2b70650d20cf365a38b3d8a5df591b44006\"" Mar 14 00:38:30.232166 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:30.237792 kubelet[2583]: E0314 00:38:30.237705 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:30.336310 systemd[1]: Started cri-containerd-cfca2f5012296e29e685ca4ff6adc2b70650d20cf365a38b3d8a5df591b44006.scope - libcontainer container cfca2f5012296e29e685ca4ff6adc2b70650d20cf365a38b3d8a5df591b44006. 
Mar 14 00:38:30.347545 containerd[1451]: time="2026-03-14T00:38:30.346481479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d6kbv,Uid:3444a880-39b3-4cba-abbf-267ebeaaa2fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6\"" Mar 14 00:38:30.421344 containerd[1451]: time="2026-03-14T00:38:30.421013643Z" level=info msg="StartContainer for \"cfca2f5012296e29e685ca4ff6adc2b70650d20cf365a38b3d8a5df591b44006\" returns successfully" Mar 14 00:38:30.605543 systemd-networkd[1380]: calidece041a1c5: Link UP Mar 14 00:38:30.607524 systemd-networkd[1380]: calidece041a1c5: Gained carrier Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.329 [INFO][5235] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0 calico-apiserver-d4cbf978c- calico-system f5b883cf-5732-4d59-9bf1-5e7701804c52 1140 0 2026-03-14 00:37:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d4cbf978c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d4cbf978c-p9t45 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidece041a1c5 [] [] }} ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.330 [INFO][5235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.464 [INFO][5282] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" HandleID="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.480 [INFO][5282] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" HandleID="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000490c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-d4cbf978c-p9t45", "timestamp":"2026-03-14 00:38:30.464791389 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002dac60)} Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.480 [INFO][5282] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.480 [INFO][5282] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.480 [INFO][5282] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.488 [INFO][5282] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.514 [INFO][5282] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.532 [INFO][5282] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.536 [INFO][5282] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.541 [INFO][5282] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.541 [INFO][5282] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.549 [INFO][5282] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796 Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.562 [INFO][5282] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.587 [INFO][5282] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.587 [INFO][5282] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" host="localhost" Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.587 [INFO][5282] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:38:30.639072 containerd[1451]: 2026-03-14 00:38:30.587 [INFO][5282] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" HandleID="k8s-pod-network.eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.640200 containerd[1451]: 2026-03-14 00:38:30.598 [INFO][5235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"f5b883cf-5732-4d59-9bf1-5e7701804c52", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d4cbf978c-p9t45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidece041a1c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:30.640200 containerd[1451]: 2026-03-14 00:38:30.598 [INFO][5235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.640200 containerd[1451]: 2026-03-14 00:38:30.598 [INFO][5235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidece041a1c5 ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.640200 containerd[1451]: 2026-03-14 00:38:30.607 [INFO][5235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.640200 containerd[1451]: 2026-03-14 00:38:30.608 [INFO][5235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"f5b883cf-5732-4d59-9bf1-5e7701804c52", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796", Pod:"calico-apiserver-d4cbf978c-p9t45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidece041a1c5", MAC:"1e:e0:e5:9a:76:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:38:30.640200 containerd[1451]: 2026-03-14 00:38:30.634 [INFO][5235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796" 
Namespace="calico-system" Pod="calico-apiserver-d4cbf978c-p9t45" WorkloadEndpoint="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:38:30.756425 containerd[1451]: time="2026-03-14T00:38:30.753648175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:38:30.756425 containerd[1451]: time="2026-03-14T00:38:30.753735015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:38:30.756425 containerd[1451]: time="2026-03-14T00:38:30.753758067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:30.756425 containerd[1451]: time="2026-03-14T00:38:30.753924614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:38:30.792113 containerd[1451]: time="2026-03-14T00:38:30.790881755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:30.795524 containerd[1451]: time="2026-03-14T00:38:30.795095093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:38:30.801531 containerd[1451]: time="2026-03-14T00:38:30.799256837Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:30.806793 containerd[1451]: time="2026-03-14T00:38:30.806543033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:30.808955 containerd[1451]: 
time="2026-03-14T00:38:30.808265925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.231532387s" Mar 14 00:38:30.808955 containerd[1451]: time="2026-03-14T00:38:30.808360309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:38:30.820286 systemd[1]: Started cri-containerd-eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796.scope - libcontainer container eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796. Mar 14 00:38:30.832751 containerd[1451]: time="2026-03-14T00:38:30.831148701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:38:30.853046 containerd[1451]: time="2026-03-14T00:38:30.852747531Z" level=info msg="CreateContainer within sandbox \"93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:38:30.878845 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:38:30.985283 containerd[1451]: time="2026-03-14T00:38:30.985170132Z" level=info msg="CreateContainer within sandbox \"93946db23791a40943e74b8473301b748d0664b229173c467c4e9aaa2248590c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e9e3f7d7edbc384e16c49c85c39096d732f211403c892147c4dcbd39c7e516d0\"" Mar 14 00:38:30.998118 containerd[1451]: time="2026-03-14T00:38:30.995619539Z" level=info msg="StartContainer for 
\"e9e3f7d7edbc384e16c49c85c39096d732f211403c892147c4dcbd39c7e516d0\"" Mar 14 00:38:31.027608 containerd[1451]: time="2026-03-14T00:38:31.027264796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4cbf978c-p9t45,Uid:f5b883cf-5732-4d59-9bf1-5e7701804c52,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796\"" Mar 14 00:38:31.045223 containerd[1451]: time="2026-03-14T00:38:31.045134968Z" level=info msg="CreateContainer within sandbox \"eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:38:31.064210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749406962.mount: Deactivated successfully. Mar 14 00:38:31.102735 containerd[1451]: time="2026-03-14T00:38:31.101346245Z" level=info msg="CreateContainer within sandbox \"eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83914da3e7cbd074d5fe6cfa15da45540ee92937dda458cf2e034fc470189009\"" Mar 14 00:38:31.156121 systemd-networkd[1380]: cali673bd7f25cd: Gained IPv6LL Mar 14 00:38:31.174891 containerd[1451]: time="2026-03-14T00:38:31.174844181Z" level=info msg="StartContainer for \"83914da3e7cbd074d5fe6cfa15da45540ee92937dda458cf2e034fc470189009\"" Mar 14 00:38:31.213436 systemd[1]: Started cri-containerd-e9e3f7d7edbc384e16c49c85c39096d732f211403c892147c4dcbd39c7e516d0.scope - libcontainer container e9e3f7d7edbc384e16c49c85c39096d732f211403c892147c4dcbd39c7e516d0. 
Mar 14 00:38:31.306727 kubelet[2583]: E0314 00:38:31.306694 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:31.501725 kubelet[2583]: E0314 00:38:31.498996 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:31.510028 systemd[1]: Started cri-containerd-83914da3e7cbd074d5fe6cfa15da45540ee92937dda458cf2e034fc470189009.scope - libcontainer container 83914da3e7cbd074d5fe6cfa15da45540ee92937dda458cf2e034fc470189009. Mar 14 00:38:31.611996 containerd[1451]: time="2026-03-14T00:38:31.611771082Z" level=info msg="StartContainer for \"e9e3f7d7edbc384e16c49c85c39096d732f211403c892147c4dcbd39c7e516d0\" returns successfully" Mar 14 00:38:31.780293 systemd-networkd[1380]: calia37fee2ae57: Gained IPv6LL Mar 14 00:38:31.789626 containerd[1451]: time="2026-03-14T00:38:31.789412676Z" level=info msg="StartContainer for \"83914da3e7cbd074d5fe6cfa15da45540ee92937dda458cf2e034fc470189009\" returns successfully" Mar 14 00:38:31.849236 systemd-networkd[1380]: calidece041a1c5: Gained IPv6LL Mar 14 00:38:32.062197 systemd[1]: run-containerd-runc-k8s.io-83914da3e7cbd074d5fe6cfa15da45540ee92937dda458cf2e034fc470189009-runc.mDYNcA.mount: Deactivated successfully. 
Mar 14 00:38:32.528487 kubelet[2583]: E0314 00:38:32.528009 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:32.564910 kubelet[2583]: I0314 00:38:32.561543 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hsmxv" podStartSLOduration=73.561527196 podStartE2EDuration="1m13.561527196s" podCreationTimestamp="2026-03-14 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:38:31.56029491 +0000 UTC m=+77.200382026" watchObservedRunningTime="2026-03-14 00:38:32.561527196 +0000 UTC m=+78.201614302" Mar 14 00:38:32.662901 kubelet[2583]: I0314 00:38:32.660698 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-d4cbf978c-p9t45" podStartSLOduration=58.660678654 podStartE2EDuration="58.660678654s" podCreationTimestamp="2026-03-14 00:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:38:32.570144219 +0000 UTC m=+78.210231316" watchObservedRunningTime="2026-03-14 00:38:32.660678654 +0000 UTC m=+78.300765750" Mar 14 00:38:33.529182 kubelet[2583]: I0314 00:38:33.529118 2583 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:38:33.891282 containerd[1451]: time="2026-03-14T00:38:33.887428431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:33.891282 containerd[1451]: time="2026-03-14T00:38:33.889973263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:38:33.894926 containerd[1451]: 
time="2026-03-14T00:38:33.894158862Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:33.901243 containerd[1451]: time="2026-03-14T00:38:33.901060250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:33.903282 containerd[1451]: time="2026-03-14T00:38:33.903241562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.072025106s" Mar 14 00:38:33.903538 containerd[1451]: time="2026-03-14T00:38:33.903506010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:38:33.916346 containerd[1451]: time="2026-03-14T00:38:33.915384049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:38:33.940217 containerd[1451]: time="2026-03-14T00:38:33.939990225Z" level=info msg="CreateContainer within sandbox \"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:38:34.019811 containerd[1451]: time="2026-03-14T00:38:34.019507459Z" level=info msg="CreateContainer within sandbox \"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"e200590cb2535382cad85552a6ba8fc6cbfdc7a5dc10d66cf46df5b36229de1d\"" Mar 14 00:38:34.022888 containerd[1451]: time="2026-03-14T00:38:34.022235245Z" level=info msg="StartContainer for \"e200590cb2535382cad85552a6ba8fc6cbfdc7a5dc10d66cf46df5b36229de1d\"" Mar 14 00:38:34.222096 systemd[1]: Started cri-containerd-e200590cb2535382cad85552a6ba8fc6cbfdc7a5dc10d66cf46df5b36229de1d.scope - libcontainer container e200590cb2535382cad85552a6ba8fc6cbfdc7a5dc10d66cf46df5b36229de1d. Mar 14 00:38:34.404419 containerd[1451]: time="2026-03-14T00:38:34.403888692Z" level=info msg="StartContainer for \"e200590cb2535382cad85552a6ba8fc6cbfdc7a5dc10d66cf46df5b36229de1d\" returns successfully" Mar 14 00:38:34.615240 kubelet[2583]: I0314 00:38:34.610954 2583 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:38:34.615240 kubelet[2583]: I0314 00:38:34.610992 2583 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:38:34.620812 kubelet[2583]: I0314 00:38:34.618308 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-q92fd" podStartSLOduration=51.678355808 podStartE2EDuration="58.618293608s" podCreationTimestamp="2026-03-14 00:37:36 +0000 UTC" firstStartedPulling="2026-03-14 00:38:26.969334257 +0000 UTC m=+72.609421344" lastFinishedPulling="2026-03-14 00:38:33.909272058 +0000 UTC m=+79.549359144" observedRunningTime="2026-03-14 00:38:34.609728225 +0000 UTC m=+80.249815331" watchObservedRunningTime="2026-03-14 00:38:34.618293608 +0000 UTC m=+80.258380694" Mar 14 00:38:34.620812 kubelet[2583]: I0314 00:38:34.618855 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-76b548fd49-vhrh8" podStartSLOduration=6.111864064 podStartE2EDuration="17.618846178s" 
podCreationTimestamp="2026-03-14 00:38:17 +0000 UTC" firstStartedPulling="2026-03-14 00:38:19.304804864 +0000 UTC m=+64.944891950" lastFinishedPulling="2026-03-14 00:38:30.811786968 +0000 UTC m=+76.451874064" observedRunningTime="2026-03-14 00:38:32.662366965 +0000 UTC m=+78.302454071" watchObservedRunningTime="2026-03-14 00:38:34.618846178 +0000 UTC m=+80.258933264" Mar 14 00:38:35.798636 kubelet[2583]: E0314 00:38:35.798309 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:36.643158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606088817.mount: Deactivated successfully. Mar 14 00:38:37.548013 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:53594.service - OpenSSH per-connection server daemon (10.0.0.1:53594). Mar 14 00:38:37.688121 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 53594 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:38:37.695184 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:38:37.763123 systemd-logind[1437]: New session 8 of user core. Mar 14 00:38:37.773964 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 14 00:38:37.796719 containerd[1451]: time="2026-03-14T00:38:37.796543209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:37.799673 containerd[1451]: time="2026-03-14T00:38:37.799094211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 14 00:38:37.801902 containerd[1451]: time="2026-03-14T00:38:37.801773925Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:37.806816 containerd[1451]: time="2026-03-14T00:38:37.805926373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:37.807339 containerd[1451]: time="2026-03-14T00:38:37.807274617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.891838702s" Mar 14 00:38:37.807339 containerd[1451]: time="2026-03-14T00:38:37.807329397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 14 00:38:37.849319 kubelet[2583]: I0314 00:38:37.849251 2583 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:38:37.851135 containerd[1451]: time="2026-03-14T00:38:37.851052759Z" level=info msg="CreateContainer within sandbox \"a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:38:37.889756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53964809.mount: Deactivated successfully. Mar 14 00:38:37.903348 containerd[1451]: time="2026-03-14T00:38:37.903165875Z" level=info msg="CreateContainer within sandbox \"a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fdd610747ec02817319e12996fec9a36222da4ea8529c4f826ccb640c880bdc7\"" Mar 14 00:38:37.905003 containerd[1451]: time="2026-03-14T00:38:37.904855704Z" level=info msg="StartContainer for \"fdd610747ec02817319e12996fec9a36222da4ea8529c4f826ccb640c880bdc7\"" Mar 14 00:38:37.999091 systemd[1]: Started cri-containerd-fdd610747ec02817319e12996fec9a36222da4ea8529c4f826ccb640c880bdc7.scope - libcontainer container fdd610747ec02817319e12996fec9a36222da4ea8529c4f826ccb640c880bdc7. Mar 14 00:38:38.084769 containerd[1451]: time="2026-03-14T00:38:38.084387811Z" level=info msg="StartContainer for \"fdd610747ec02817319e12996fec9a36222da4ea8529c4f826ccb640c880bdc7\" returns successfully" Mar 14 00:38:38.564317 sshd[5548]: pam_unix(sshd:session): session closed for user core Mar 14 00:38:38.575257 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:53594.service: Deactivated successfully. Mar 14 00:38:38.581033 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:38:38.586207 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:38:38.590269 systemd-logind[1437]: Removed session 8. 
Mar 14 00:38:38.627987 kubelet[2583]: I0314 00:38:38.626936 2583 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-d6kbv" podStartSLOduration=57.143162104 podStartE2EDuration="1m4.626915586s" podCreationTimestamp="2026-03-14 00:37:34 +0000 UTC" firstStartedPulling="2026-03-14 00:38:30.352604959 +0000 UTC m=+75.992692045" lastFinishedPulling="2026-03-14 00:38:37.836358441 +0000 UTC m=+83.476445527" observedRunningTime="2026-03-14 00:38:38.624952253 +0000 UTC m=+84.265039338" watchObservedRunningTime="2026-03-14 00:38:38.626915586 +0000 UTC m=+84.267002691" Mar 14 00:38:43.592206 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:46590.service - OpenSSH per-connection server daemon (10.0.0.1:46590). Mar 14 00:38:43.654935 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 46590 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:38:43.656855 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:38:43.664763 systemd-logind[1437]: New session 9 of user core. Mar 14 00:38:43.677950 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 14 00:38:43.897975 update_engine[1439]: I20260314 00:38:43.897457 1439 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 14 00:38:43.897975 update_engine[1439]: I20260314 00:38:43.897650 1439 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 14 00:38:43.899044 sshd[5673]: pam_unix(sshd:session): session closed for user core Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.901352 1439 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.902184 1439 omaha_request_params.cc:62] Current group set to lts Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.902360 1439 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.902380 1439 update_attempter.cc:643] Scheduling an action processor start. Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.902467 1439 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.902529 1439 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.903177 1439 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.903194 1439 omaha_request_action.cc:272] Request: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: Mar 14 00:38:43.904841 update_engine[1439]: I20260314 00:38:43.903203 1439 libcurl_http_fetcher.cc:47] Starting/Resuming 
transfer Mar 14 00:38:43.908986 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:46590.service: Deactivated successfully. Mar 14 00:38:43.912755 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:38:43.915067 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:38:43.917141 update_engine[1439]: I20260314 00:38:43.916463 1439 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:38:43.917141 update_engine[1439]: I20260314 00:38:43.917047 1439 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 14 00:38:43.922019 systemd-logind[1437]: Removed session 9. Mar 14 00:38:43.924775 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 14 00:38:43.935872 update_engine[1439]: E20260314 00:38:43.935740 1439 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:38:43.935979 update_engine[1439]: I20260314 00:38:43.935915 1439 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 14 00:38:44.800931 kubelet[2583]: E0314 00:38:44.799743 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:48.551333 systemd[1]: run-containerd-runc-k8s.io-fdd610747ec02817319e12996fec9a36222da4ea8529c4f826ccb640c880bdc7-runc.bRxyuT.mount: Deactivated successfully. Mar 14 00:38:48.921632 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:46600.service - OpenSSH per-connection server daemon (10.0.0.1:46600). Mar 14 00:38:49.048150 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 46600 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:38:49.052050 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:38:49.062805 systemd-logind[1437]: New session 10 of user core. 
Mar 14 00:38:49.077864 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:38:49.349064 sshd[5734]: pam_unix(sshd:session): session closed for user core Mar 14 00:38:49.359232 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:46600.service: Deactivated successfully. Mar 14 00:38:49.364759 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:38:49.366285 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:38:49.369736 systemd-logind[1437]: Removed session 10. Mar 14 00:38:51.803682 kubelet[2583]: E0314 00:38:51.802042 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:53.867750 update_engine[1439]: I20260314 00:38:53.866461 1439 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:38:53.867750 update_engine[1439]: I20260314 00:38:53.867722 1439 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:38:53.868311 update_engine[1439]: I20260314 00:38:53.868031 1439 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 14 00:38:53.897307 update_engine[1439]: E20260314 00:38:53.896855 1439 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:38:53.897307 update_engine[1439]: I20260314 00:38:53.896986 1439 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 14 00:38:54.398274 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:57794.service - OpenSSH per-connection server daemon (10.0.0.1:57794). Mar 14 00:38:54.479979 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 57794 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:38:54.482192 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:38:54.496818 systemd-logind[1437]: New session 11 of user core. 
Mar 14 00:38:54.516428 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:38:54.835088 sshd[5779]: pam_unix(sshd:session): session closed for user core Mar 14 00:38:54.853905 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:38:54.857657 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:57794.service: Deactivated successfully. Mar 14 00:38:54.866932 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:38:54.872828 systemd-logind[1437]: Removed session 11. Mar 14 00:38:57.799403 kubelet[2583]: E0314 00:38:57.799115 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:59.885972 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:57798.service - OpenSSH per-connection server daemon (10.0.0.1:57798). Mar 14 00:38:59.964855 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 57798 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:38:59.966730 sshd[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:38:59.977907 systemd-logind[1437]: New session 12 of user core. Mar 14 00:38:59.993958 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:39:00.317093 sshd[5795]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:00.329364 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:57798.service: Deactivated successfully. Mar 14 00:39:00.333997 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:39:00.336290 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:39:00.345349 systemd-logind[1437]: Removed session 12. 
Mar 14 00:39:03.952401 update_engine[1439]: I20260314 00:39:03.908840 1439 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:39:03.989550 update_engine[1439]: I20260314 00:39:03.984684 1439 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:39:03.989550 update_engine[1439]: I20260314 00:39:03.985870 1439 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 14 00:39:04.004272 update_engine[1439]: E20260314 00:39:04.004177 1439 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:39:04.004755 update_engine[1439]: I20260314 00:39:04.004693 1439 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 14 00:39:05.388857 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:39354.service - OpenSSH per-connection server daemon (10.0.0.1:39354). Mar 14 00:39:05.493402 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 39354 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:05.499070 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:05.512132 systemd-logind[1437]: New session 13 of user core. Mar 14 00:39:05.519796 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:39:05.980277 sshd[5821]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:05.993820 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:39354.service: Deactivated successfully. Mar 14 00:39:06.001044 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:39:06.003254 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:39:06.006323 systemd-logind[1437]: Removed session 13. Mar 14 00:39:11.004330 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:45376.service - OpenSSH per-connection server daemon (10.0.0.1:45376). 
Mar 14 00:39:11.076617 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 45376 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:11.089845 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:11.106737 systemd-logind[1437]: New session 14 of user core. Mar 14 00:39:11.119895 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:39:11.498822 sshd[5886]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:11.510760 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:45388.service - OpenSSH per-connection server daemon (10.0.0.1:45388). Mar 14 00:39:11.525874 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:45376.service: Deactivated successfully. Mar 14 00:39:11.528683 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:39:11.539935 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:39:11.557144 systemd-logind[1437]: Removed session 14. Mar 14 00:39:11.753363 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 45388 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:11.777975 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:11.793678 systemd-logind[1437]: New session 15 of user core. Mar 14 00:39:11.810456 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:39:12.362653 sshd[5907]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:12.384368 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:45388.service: Deactivated successfully. Mar 14 00:39:12.405787 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:39:12.412411 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:39:12.437457 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:45404.service - OpenSSH per-connection server daemon (10.0.0.1:45404). 
Mar 14 00:39:12.439990 systemd-logind[1437]: Removed session 15. Mar 14 00:39:12.565618 sshd[5921]: Accepted publickey for core from 10.0.0.1 port 45404 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:12.573358 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:12.595608 systemd-logind[1437]: New session 16 of user core. Mar 14 00:39:12.607364 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:39:12.910070 sshd[5921]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:12.922720 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:45404.service: Deactivated successfully. Mar 14 00:39:12.928519 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:39:12.932272 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:39:12.934806 systemd-logind[1437]: Removed session 16. Mar 14 00:39:13.631107 kubelet[2583]: I0314 00:39:13.629320 2583 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:39:13.864915 update_engine[1439]: I20260314 00:39:13.864737 1439 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:39:13.866232 update_engine[1439]: I20260314 00:39:13.866104 1439 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:39:13.867116 update_engine[1439]: I20260314 00:39:13.866422 1439 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:39:13.895543 update_engine[1439]: E20260314 00:39:13.894806 1439 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:39:13.895543 update_engine[1439]: I20260314 00:39:13.894905 1439 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 14 00:39:13.895543 update_engine[1439]: I20260314 00:39:13.894925 1439 omaha_request_action.cc:617] Omaha request response: Mar 14 00:39:13.895543 update_engine[1439]: E20260314 00:39:13.895223 1439 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 14 00:39:13.895543 update_engine[1439]: I20260314 00:39:13.895264 1439 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 14 00:39:13.895543 update_engine[1439]: I20260314 00:39:13.895275 1439 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:39:13.895543 update_engine[1439]: I20260314 00:39:13.895285 1439 update_attempter.cc:306] Processing Done. Mar 14 00:39:13.895543 update_engine[1439]: E20260314 00:39:13.895314 1439 update_attempter.cc:619] Update failed. Mar 14 00:39:13.906622 update_engine[1439]: I20260314 00:39:13.904400 1439 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.907944 1439 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.907987 1439 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.908092 1439 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.908133 1439 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.908142 1439 omaha_request_action.cc:272] Request: Mar 14 00:39:13.908934 update_engine[1439]: Mar 14 00:39:13.908934 update_engine[1439]: Mar 14 00:39:13.908934 update_engine[1439]: Mar 14 00:39:13.908934 update_engine[1439]: Mar 14 00:39:13.908934 update_engine[1439]: Mar 14 00:39:13.908934 update_engine[1439]: Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.908152 1439 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.908538 1439 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 14 00:39:13.908934 update_engine[1439]: I20260314 00:39:13.908878 1439 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 14 00:39:13.912711 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 14 00:39:13.933946 update_engine[1439]: E20260314 00:39:13.933646 1439 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933750 1439 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933763 1439 omaha_request_action.cc:617] Omaha request response: Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933772 1439 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933779 1439 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933785 1439 update_attempter.cc:306] Processing Done. Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933794 1439 update_attempter.cc:310] Error event sent. 
Mar 14 00:39:13.933946 update_engine[1439]: I20260314 00:39:13.933809 1439 update_check_scheduler.cc:74] Next update check in 42m15s Mar 14 00:39:13.934412 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 14 00:39:14.740869 containerd[1451]: time="2026-03-14T00:39:14.737531340Z" level=info msg="StopPodSandbox for \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\"" Mar 14 00:39:14.804783 kubelet[2583]: E0314 00:39:14.804376 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.195 [WARNING][5960] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0", GenerateName:"calico-kube-controllers-5cc94955b9-", Namespace:"calico-system", SelfLink:"", UID:"9e3a983d-b049-4edf-864f-a102bf11f3b8", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cc94955b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d", Pod:"calico-kube-controllers-5cc94955b9-l2gbs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48eba595ef2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.198 [INFO][5960] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.198 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" iface="eth0" netns="" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.198 [INFO][5960] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.198 [INFO][5960] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.429 [INFO][5969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.432 [INFO][5969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.432 [INFO][5969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.465 [WARNING][5969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.465 [INFO][5969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.478 [INFO][5969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:15.492452 containerd[1451]: 2026-03-14 00:39:15.486 [INFO][5960] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.492452 containerd[1451]: time="2026-03-14T00:39:15.492404945Z" level=info msg="TearDown network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\" successfully" Mar 14 00:39:15.492452 containerd[1451]: time="2026-03-14T00:39:15.492442595Z" level=info msg="StopPodSandbox for \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\" returns successfully" Mar 14 00:39:15.606896 containerd[1451]: time="2026-03-14T00:39:15.606626657Z" level=info msg="RemovePodSandbox for \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\"" Mar 14 00:39:15.611816 containerd[1451]: time="2026-03-14T00:39:15.611125364Z" level=info msg="Forcibly stopping sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\"" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.710 [WARNING][5987] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0", GenerateName:"calico-kube-controllers-5cc94955b9-", Namespace:"calico-system", SelfLink:"", UID:"9e3a983d-b049-4edf-864f-a102bf11f3b8", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cc94955b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8946e7e2fc116f7ecad347f4d1fcc0a933a0276dadcac64f3cc8fbc702df84d", Pod:"calico-kube-controllers-5cc94955b9-l2gbs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48eba595ef2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.710 [INFO][5987] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.710 [INFO][5987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" iface="eth0" netns="" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.710 [INFO][5987] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.710 [INFO][5987] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.796 [INFO][5995] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.796 [INFO][5995] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.796 [INFO][5995] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.819 [WARNING][5995] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.819 [INFO][5995] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" HandleID="k8s-pod-network.6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Workload="localhost-k8s-calico--kube--controllers--5cc94955b9--l2gbs-eth0" Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.826 [INFO][5995] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:15.840800 containerd[1451]: 2026-03-14 00:39:15.834 [INFO][5987] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec" Mar 14 00:39:15.840800 containerd[1451]: time="2026-03-14T00:39:15.840721787Z" level=info msg="TearDown network for sandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\" successfully" Mar 14 00:39:15.930401 containerd[1451]: time="2026-03-14T00:39:15.929922202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:15.930401 containerd[1451]: time="2026-03-14T00:39:15.930058066Z" level=info msg="RemovePodSandbox \"6e20a85a400977775b73e98a90e6550e1541d0c0de85cf30e395a4e5c23499ec\" returns successfully" Mar 14 00:39:15.961720 containerd[1451]: time="2026-03-14T00:39:15.959762605Z" level=info msg="StopPodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\"" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.140 [WARNING][6026] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" WorkloadEndpoint="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.141 [INFO][6026] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.141 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" iface="eth0" netns="" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.141 [INFO][6026] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.141 [INFO][6026] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.230 [INFO][6043] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.230 [INFO][6043] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.230 [INFO][6043] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.249 [WARNING][6043] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.249 [INFO][6043] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.255 [INFO][6043] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:16.266040 containerd[1451]: 2026-03-14 00:39:16.260 [INFO][6026] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.266040 containerd[1451]: time="2026-03-14T00:39:16.265952481Z" level=info msg="TearDown network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" successfully" Mar 14 00:39:16.266040 containerd[1451]: time="2026-03-14T00:39:16.265989911Z" level=info msg="StopPodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" returns successfully" Mar 14 00:39:16.267879 containerd[1451]: time="2026-03-14T00:39:16.267326354Z" level=info msg="RemovePodSandbox for \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\"" Mar 14 00:39:16.267879 containerd[1451]: time="2026-03-14T00:39:16.267363894Z" level=info msg="Forcibly stopping sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\"" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.385 [WARNING][6060] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" 
WorkloadEndpoint="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.385 [INFO][6060] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.385 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" iface="eth0" netns="" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.385 [INFO][6060] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.385 [INFO][6060] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.443 [INFO][6069] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.443 [INFO][6069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.443 [INFO][6069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.455 [WARNING][6069] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.455 [INFO][6069] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" HandleID="k8s-pod-network.0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Workload="localhost-k8s-whisker--5cb66794d--x6prv-eth0" Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.460 [INFO][6069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:16.472409 containerd[1451]: 2026-03-14 00:39:16.468 [INFO][6060] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b" Mar 14 00:39:16.474510 containerd[1451]: time="2026-03-14T00:39:16.472461632Z" level=info msg="TearDown network for sandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" successfully" Mar 14 00:39:16.491791 containerd[1451]: time="2026-03-14T00:39:16.491684794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:16.491964 containerd[1451]: time="2026-03-14T00:39:16.491813724Z" level=info msg="RemovePodSandbox \"0c75f79e1203eb291254685c726499d7633dcab804ddf396a1a9828da2b1752b\" returns successfully" Mar 14 00:39:16.492799 containerd[1451]: time="2026-03-14T00:39:16.492773328Z" level=info msg="StopPodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\"" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.613 [WARNING][6087] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"3444a880-39b3-4cba-abbf-267ebeaaa2fc", ResourceVersion:"1382", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6", Pod:"goldmane-9f7667bb8-d6kbv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673bd7f25cd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.614 [INFO][6087] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.614 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" iface="eth0" netns="" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.614 [INFO][6087] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.614 [INFO][6087] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.692 [INFO][6095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.693 [INFO][6095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.693 [INFO][6095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.708 [WARNING][6095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.709 [INFO][6095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.723 [INFO][6095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:16.746641 containerd[1451]: 2026-03-14 00:39:16.735 [INFO][6087] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.746641 containerd[1451]: time="2026-03-14T00:39:16.743801487Z" level=info msg="TearDown network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" successfully" Mar 14 00:39:16.746641 containerd[1451]: time="2026-03-14T00:39:16.743840389Z" level=info msg="StopPodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" returns successfully" Mar 14 00:39:16.746641 containerd[1451]: time="2026-03-14T00:39:16.744412947Z" level=info msg="RemovePodSandbox for \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\"" Mar 14 00:39:16.746641 containerd[1451]: time="2026-03-14T00:39:16.744450857Z" level=info msg="Forcibly stopping sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\"" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.862 [WARNING][6113] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"3444a880-39b3-4cba-abbf-267ebeaaa2fc", ResourceVersion:"1382", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a83bdeef2dc3611a2a0bcb93daaf36c2a48bbc08aa7c8ebc1783ed75fe4b69e6", Pod:"goldmane-9f7667bb8-d6kbv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673bd7f25cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.862 [INFO][6113] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.864 [INFO][6113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" iface="eth0" netns="" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.865 [INFO][6113] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.867 [INFO][6113] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.926 [INFO][6121] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.926 [INFO][6121] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.926 [INFO][6121] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.949 [WARNING][6121] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.950 [INFO][6121] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" HandleID="k8s-pod-network.0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Workload="localhost-k8s-goldmane--9f7667bb8--d6kbv-eth0" Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.955 [INFO][6121] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:16.967444 containerd[1451]: 2026-03-14 00:39:16.961 [INFO][6113] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78" Mar 14 00:39:16.967444 containerd[1451]: time="2026-03-14T00:39:16.967349637Z" level=info msg="TearDown network for sandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" successfully" Mar 14 00:39:16.975433 containerd[1451]: time="2026-03-14T00:39:16.975375301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:16.975867 containerd[1451]: time="2026-03-14T00:39:16.975805643Z" level=info msg="RemovePodSandbox \"0a0678b778f01582054c7e364eaf8a38c62313fa43233ee0668c8efcff9dbf78\" returns successfully" Mar 14 00:39:16.976541 containerd[1451]: time="2026-03-14T00:39:16.976403038Z" level=info msg="StopPodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\"" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.083 [WARNING][6139] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q92fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72792cd6-748a-469c-b9e2-1b61caf289ee", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1", Pod:"csi-node-driver-q92fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05a5dead07a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.083 [INFO][6139] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.083 [INFO][6139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" iface="eth0" netns="" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.083 [INFO][6139] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.083 [INFO][6139] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.144 [INFO][6148] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.144 [INFO][6148] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.144 [INFO][6148] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.158 [WARNING][6148] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.158 [INFO][6148] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.162 [INFO][6148] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:17.174815 containerd[1451]: 2026-03-14 00:39:17.166 [INFO][6139] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.174815 containerd[1451]: time="2026-03-14T00:39:17.174742001Z" level=info msg="TearDown network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" successfully" Mar 14 00:39:17.174815 containerd[1451]: time="2026-03-14T00:39:17.174772288Z" level=info msg="StopPodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" returns successfully" Mar 14 00:39:17.177144 containerd[1451]: time="2026-03-14T00:39:17.175525474Z" level=info msg="RemovePodSandbox for \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\"" Mar 14 00:39:17.177144 containerd[1451]: time="2026-03-14T00:39:17.175594502Z" level=info msg="Forcibly stopping sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\"" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.253 [WARNING][6164] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q92fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72792cd6-748a-469c-b9e2-1b61caf289ee", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c022463b19ada7e6222f2550549b6f1f66b80d8d98ce86fd1dc659d60dd577f1", Pod:"csi-node-driver-q92fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05a5dead07a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.254 [INFO][6164] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.254 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" iface="eth0" netns="" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.254 [INFO][6164] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.254 [INFO][6164] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.328 [INFO][6173] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.328 [INFO][6173] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.328 [INFO][6173] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.366 [WARNING][6173] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.366 [INFO][6173] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" HandleID="k8s-pod-network.bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Workload="localhost-k8s-csi--node--driver--q92fd-eth0" Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.377 [INFO][6173] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:17.391690 containerd[1451]: 2026-03-14 00:39:17.383 [INFO][6164] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a" Mar 14 00:39:17.391690 containerd[1451]: time="2026-03-14T00:39:17.388449707Z" level=info msg="TearDown network for sandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" successfully" Mar 14 00:39:17.403195 containerd[1451]: time="2026-03-14T00:39:17.401966995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:17.403195 containerd[1451]: time="2026-03-14T00:39:17.402856426Z" level=info msg="RemovePodSandbox \"bdf0d80387ebeda4d335dc29a0ad194207c3b55e33fe202e3bd0e61153fe074a\" returns successfully" Mar 14 00:39:17.409503 containerd[1451]: time="2026-03-14T00:39:17.405878548Z" level=info msg="StopPodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\"" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.524 [WARNING][6190] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hsmxv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"46df48d4-8ce4-4d83-97c1-d2d7b89d6608", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559", Pod:"coredns-7d764666f9-hsmxv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia37fee2ae57", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.525 [INFO][6190] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.525 [INFO][6190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" iface="eth0" netns="" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.525 [INFO][6190] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.525 [INFO][6190] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.575 [INFO][6199] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.576 [INFO][6199] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.576 [INFO][6199] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.590 [WARNING][6199] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.593 [INFO][6199] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.602 [INFO][6199] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:17.620647 containerd[1451]: 2026-03-14 00:39:17.610 [INFO][6190] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.620647 containerd[1451]: time="2026-03-14T00:39:17.618434961Z" level=info msg="TearDown network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" successfully" Mar 14 00:39:17.620647 containerd[1451]: time="2026-03-14T00:39:17.618514049Z" level=info msg="StopPodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" returns successfully" Mar 14 00:39:17.621414 containerd[1451]: time="2026-03-14T00:39:17.621222790Z" level=info msg="RemovePodSandbox for \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\"" Mar 14 00:39:17.621414 containerd[1451]: time="2026-03-14T00:39:17.621260680Z" level=info msg="Forcibly stopping sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\"" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.747 [WARNING][6215] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hsmxv-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"46df48d4-8ce4-4d83-97c1-d2d7b89d6608", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"265aeea3ccf970cb0aea9a75bf35312f5b943537d7ff35b6eea229055c4ec559", Pod:"coredns-7d764666f9-hsmxv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia37fee2ae57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.747 [INFO][6215] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.747 [INFO][6215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" iface="eth0" netns="" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.748 [INFO][6215] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.748 [INFO][6215] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.817 [INFO][6223] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.817 [INFO][6223] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.817 [INFO][6223] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.835 [WARNING][6223] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.835 [INFO][6223] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" HandleID="k8s-pod-network.38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Workload="localhost-k8s-coredns--7d764666f9--hsmxv-eth0" Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.840 [INFO][6223] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:17.855681 containerd[1451]: 2026-03-14 00:39:17.847 [INFO][6215] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878" Mar 14 00:39:17.855681 containerd[1451]: time="2026-03-14T00:39:17.854894823Z" level=info msg="TearDown network for sandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" successfully" Mar 14 00:39:17.866691 containerd[1451]: time="2026-03-14T00:39:17.866411537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:17.866691 containerd[1451]: time="2026-03-14T00:39:17.866512354Z" level=info msg="RemovePodSandbox \"38c6baefde347a2d67980013c743dfc8fc7605c6f1d9584249e1b077526a5878\" returns successfully" Mar 14 00:39:17.867800 containerd[1451]: time="2026-03-14T00:39:17.867723043Z" level=info msg="StopPodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\"" Mar 14 00:39:17.957167 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:45416.service - OpenSSH per-connection server daemon (10.0.0.1:45416). Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:17.981 [WARNING][6243] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hmzcq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1528cef9-ef7a-4b03-b27c-111acf337f79", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb", Pod:"coredns-7d764666f9-hmzcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0598fe0896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:17.981 [INFO][6243] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:17.981 [INFO][6243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" iface="eth0" netns="" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:17.981 [INFO][6243] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:17.981 [INFO][6243] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.035 [INFO][6253] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.035 [INFO][6253] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.036 [INFO][6253] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.058 [WARNING][6253] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.058 [INFO][6253] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.063 [INFO][6253] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:18.072320 containerd[1451]: 2026-03-14 00:39:18.067 [INFO][6243] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.074029 containerd[1451]: time="2026-03-14T00:39:18.072335822Z" level=info msg="TearDown network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" successfully" Mar 14 00:39:18.074029 containerd[1451]: time="2026-03-14T00:39:18.072369124Z" level=info msg="StopPodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" returns successfully" Mar 14 00:39:18.074029 containerd[1451]: time="2026-03-14T00:39:18.073250405Z" level=info msg="RemovePodSandbox for \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\"" Mar 14 00:39:18.074029 containerd[1451]: time="2026-03-14T00:39:18.073298996Z" level=info msg="Forcibly stopping sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\"" Mar 14 00:39:18.148646 sshd[6251]: Accepted publickey for core from 10.0.0.1 port 45416 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:18.152839 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 
14 00:39:18.163442 systemd-logind[1437]: New session 17 of user core. Mar 14 00:39:18.178864 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.165 [WARNING][6271] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--hmzcq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1528cef9-ef7a-4b03-b27c-111acf337f79", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c2303ae6c2bfafbc4985b35d2adc031afc93b30dc8071f44e59be5d77038ebb", Pod:"coredns-7d764666f9-hmzcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0598fe0896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.166 [INFO][6271] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.166 [INFO][6271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" iface="eth0" netns="" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.166 [INFO][6271] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.166 [INFO][6271] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.213 [INFO][6279] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.213 [INFO][6279] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.213 [INFO][6279] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.224 [WARNING][6279] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.224 [INFO][6279] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" HandleID="k8s-pod-network.6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Workload="localhost-k8s-coredns--7d764666f9--hmzcq-eth0" Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.230 [INFO][6279] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:18.247680 containerd[1451]: 2026-03-14 00:39:18.234 [INFO][6271] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792" Mar 14 00:39:18.247680 containerd[1451]: time="2026-03-14T00:39:18.239083920Z" level=info msg="TearDown network for sandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" successfully" Mar 14 00:39:18.255453 containerd[1451]: time="2026-03-14T00:39:18.255353061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:18.257093 containerd[1451]: time="2026-03-14T00:39:18.255468065Z" level=info msg="RemovePodSandbox \"6c0730b61bd61662dd0b9c34ba55b5507cc6cb9a598441705f1bf177112b5792\" returns successfully" Mar 14 00:39:18.257093 containerd[1451]: time="2026-03-14T00:39:18.256834866Z" level=info msg="StopPodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\"" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.332 [WARNING][6300] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"f5b883cf-5732-4d59-9bf1-5e7701804c52", ResourceVersion:"1422", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796", Pod:"calico-apiserver-d4cbf978c-p9t45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"calidece041a1c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.333 [INFO][6300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.333 [INFO][6300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" iface="eth0" netns="" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.333 [INFO][6300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.333 [INFO][6300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.390 [INFO][6314] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.392 [INFO][6314] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.392 [INFO][6314] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.410 [WARNING][6314] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.411 [INFO][6314] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.415 [INFO][6314] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:18.438382 containerd[1451]: 2026-03-14 00:39:18.427 [INFO][6300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.439238 containerd[1451]: time="2026-03-14T00:39:18.439201477Z" level=info msg="TearDown network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" successfully" Mar 14 00:39:18.439325 containerd[1451]: time="2026-03-14T00:39:18.439304649Z" level=info msg="StopPodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" returns successfully" Mar 14 00:39:18.442763 containerd[1451]: time="2026-03-14T00:39:18.442726170Z" level=info msg="RemovePodSandbox for \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\"" Mar 14 00:39:18.443651 containerd[1451]: time="2026-03-14T00:39:18.443622351Z" level=info msg="Forcibly stopping sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\"" Mar 14 00:39:18.606463 sshd[6251]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:18.612080 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. 
Mar 14 00:39:18.613770 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:45416.service: Deactivated successfully. Mar 14 00:39:18.618087 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:39:18.623069 systemd-logind[1437]: Removed session 17. Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.588 [WARNING][6332] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"f5b883cf-5732-4d59-9bf1-5e7701804c52", ResourceVersion:"1422", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb821a64698cd556e6fb92f95bec61457950ebfc84d34240f4211455f5a36796", Pod:"calico-apiserver-d4cbf978c-p9t45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidece041a1c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.589 [INFO][6332] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.591 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" iface="eth0" netns="" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.591 [INFO][6332] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.592 [INFO][6332] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.648 [INFO][6341] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.649 [INFO][6341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.649 [INFO][6341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.662 [WARNING][6341] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.662 [INFO][6341] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" HandleID="k8s-pod-network.7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Workload="localhost-k8s-calico--apiserver--d4cbf978c--p9t45-eth0" Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.666 [INFO][6341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:18.680439 containerd[1451]: 2026-03-14 00:39:18.673 [INFO][6332] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979" Mar 14 00:39:18.680439 containerd[1451]: time="2026-03-14T00:39:18.680270502Z" level=info msg="TearDown network for sandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" successfully" Mar 14 00:39:18.695661 containerd[1451]: time="2026-03-14T00:39:18.695449081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:39:18.695812 containerd[1451]: time="2026-03-14T00:39:18.695661467Z" level=info msg="RemovePodSandbox \"7237d9ba5c79b77f02312c45316e29ccd83472e9f9726994e25b45743dcbd979\" returns successfully" Mar 14 00:39:18.697527 containerd[1451]: time="2026-03-14T00:39:18.696341832Z" level=info msg="StopPodSandbox for \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\"" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.834 [WARNING][6361] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"e17452f4-a642-40cd-ac57-08b53d428d2c", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e", Pod:"calico-apiserver-d4cbf978c-xvlsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali756615e1354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.834 [INFO][6361] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.834 [INFO][6361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" iface="eth0" netns="" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.834 [INFO][6361] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.834 [INFO][6361] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.872 [INFO][6369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.872 [INFO][6369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.872 [INFO][6369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.888 [WARNING][6369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.888 [INFO][6369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.897 [INFO][6369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:18.910626 containerd[1451]: 2026-03-14 00:39:18.902 [INFO][6361] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:18.910626 containerd[1451]: time="2026-03-14T00:39:18.909627285Z" level=info msg="TearDown network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\" successfully" Mar 14 00:39:18.910626 containerd[1451]: time="2026-03-14T00:39:18.909662060Z" level=info msg="StopPodSandbox for \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\" returns successfully" Mar 14 00:39:18.914465 containerd[1451]: time="2026-03-14T00:39:18.913892149Z" level=info msg="RemovePodSandbox for \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\"" Mar 14 00:39:18.914465 containerd[1451]: time="2026-03-14T00:39:18.913932835Z" level=info msg="Forcibly stopping sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\"" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.023 [WARNING][6386] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0", GenerateName:"calico-apiserver-d4cbf978c-", Namespace:"calico-system", SelfLink:"", UID:"e17452f4-a642-40cd-ac57-08b53d428d2c", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4cbf978c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cec9a4cd5a06e999af7f14fd612ab0415d71df3f1d6d714cc8d9bcf9f3248b5e", Pod:"calico-apiserver-d4cbf978c-xvlsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali756615e1354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.024 [INFO][6386] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.025 [INFO][6386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" iface="eth0" netns="" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.025 [INFO][6386] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.025 [INFO][6386] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.126 [INFO][6394] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.128 [INFO][6394] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.130 [INFO][6394] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.152 [WARNING][6394] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.152 [INFO][6394] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" HandleID="k8s-pod-network.798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Workload="localhost-k8s-calico--apiserver--d4cbf978c--xvlsd-eth0" Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.156 [INFO][6394] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:39:19.173955 containerd[1451]: 2026-03-14 00:39:19.169 [INFO][6386] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153" Mar 14 00:39:19.173955 containerd[1451]: time="2026-03-14T00:39:19.173754991Z" level=info msg="TearDown network for sandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\" successfully" Mar 14 00:39:19.190405 containerd[1451]: time="2026-03-14T00:39:19.190340330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:39:19.190690 containerd[1451]: time="2026-03-14T00:39:19.190459873Z" level=info msg="RemovePodSandbox \"798dd03ca8a27f6cbbf133041fe72da5b806a14613f85082312bf1c2e7a51153\" returns successfully" Mar 14 00:39:23.660296 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:49574.service - OpenSSH per-connection server daemon (10.0.0.1:49574). 
Mar 14 00:39:23.788791 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 49574 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:23.794848 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:23.812477 systemd-logind[1437]: New session 18 of user core. Mar 14 00:39:23.827060 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 14 00:39:24.092380 sshd[6405]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:24.099918 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:49574.service: Deactivated successfully. Mar 14 00:39:24.103407 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:39:24.106249 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:39:24.108982 systemd-logind[1437]: Removed session 18. Mar 14 00:39:24.254865 systemd[1]: run-containerd-runc-k8s.io-9947fedb70bcf73b29f1f49d5b5ae0e942c8f3730592906674cba4f09f577aeb-runc.tX5suS.mount: Deactivated successfully. Mar 14 00:39:29.274952 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:49588.service - OpenSSH per-connection server daemon (10.0.0.1:49588). Mar 14 00:39:29.384203 sshd[6441]: Accepted publickey for core from 10.0.0.1 port 49588 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:29.391015 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:29.434380 systemd-logind[1437]: New session 19 of user core. Mar 14 00:39:29.469388 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:39:29.959096 sshd[6441]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:29.969287 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:49588.service: Deactivated successfully. Mar 14 00:39:29.973706 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:39:29.977829 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. 
Mar 14 00:39:29.992941 systemd-logind[1437]: Removed session 19. Mar 14 00:39:35.008063 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:34728.service - OpenSSH per-connection server daemon (10.0.0.1:34728). Mar 14 00:39:35.104900 sshd[6456]: Accepted publickey for core from 10.0.0.1 port 34728 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:35.109154 sshd[6456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:35.152498 systemd-logind[1437]: New session 20 of user core. Mar 14 00:39:35.163787 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:39:35.400204 sshd[6456]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:35.419089 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:34728.service: Deactivated successfully. Mar 14 00:39:35.421487 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:39:35.424427 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:39:35.436206 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:34740.service - OpenSSH per-connection server daemon (10.0.0.1:34740). Mar 14 00:39:35.439236 systemd-logind[1437]: Removed session 20. Mar 14 00:39:35.509805 sshd[6470]: Accepted publickey for core from 10.0.0.1 port 34740 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:35.519053 sshd[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:35.536953 systemd-logind[1437]: New session 21 of user core. Mar 14 00:39:35.555006 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:39:36.455310 sshd[6470]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:36.480678 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:34740.service: Deactivated successfully. Mar 14 00:39:36.487518 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:39:36.491514 systemd-logind[1437]: Session 21 logged out. 
Waiting for processes to exit. Mar 14 00:39:36.513333 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:34742.service - OpenSSH per-connection server daemon (10.0.0.1:34742). Mar 14 00:39:36.517861 systemd-logind[1437]: Removed session 21. Mar 14 00:39:36.602719 sshd[6482]: Accepted publickey for core from 10.0.0.1 port 34742 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:36.607342 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:36.639957 systemd-logind[1437]: New session 22 of user core. Mar 14 00:39:36.660026 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:39:38.579397 sshd[6482]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:38.612076 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:34742.service: Deactivated successfully. Mar 14 00:39:38.616904 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:39:38.622722 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:39:38.672759 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:34758.service - OpenSSH per-connection server daemon (10.0.0.1:34758). Mar 14 00:39:38.676075 systemd-logind[1437]: Removed session 22. Mar 14 00:39:38.982944 sshd[6510]: Accepted publickey for core from 10.0.0.1 port 34758 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:38.991109 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:39.015643 systemd-logind[1437]: New session 23 of user core. Mar 14 00:39:39.040986 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 14 00:39:40.130874 sshd[6510]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:40.169096 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:34758.service: Deactivated successfully. Mar 14 00:39:40.173139 systemd[1]: session-23.scope: Deactivated successfully. 
Mar 14 00:39:40.176403 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:39:40.188924 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:48072.service - OpenSSH per-connection server daemon (10.0.0.1:48072). Mar 14 00:39:40.196669 systemd-logind[1437]: Removed session 23. Mar 14 00:39:40.258037 sshd[6525]: Accepted publickey for core from 10.0.0.1 port 48072 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:40.260194 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:40.283708 systemd-logind[1437]: New session 24 of user core. Mar 14 00:39:40.300024 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 14 00:39:40.652323 sshd[6525]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:40.666996 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:48072.service: Deactivated successfully. Mar 14 00:39:40.669777 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Mar 14 00:39:40.676294 systemd[1]: session-24.scope: Deactivated successfully. Mar 14 00:39:40.679209 systemd-logind[1437]: Removed session 24. Mar 14 00:39:45.713754 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:48076.service - OpenSSH per-connection server daemon (10.0.0.1:48076). Mar 14 00:39:45.912761 sshd[6570]: Accepted publickey for core from 10.0.0.1 port 48076 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:45.912058 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:45.972957 systemd-logind[1437]: New session 25 of user core. Mar 14 00:39:45.980512 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 14 00:39:46.729512 sshd[6570]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:46.756170 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. 
Mar 14 00:39:46.761503 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:48076.service: Deactivated successfully. Mar 14 00:39:46.776024 systemd[1]: session-25.scope: Deactivated successfully. Mar 14 00:39:46.779670 systemd-logind[1437]: Removed session 25. Mar 14 00:39:50.801794 kubelet[2583]: E0314 00:39:50.800094 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:39:51.755113 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:39562.service - OpenSSH per-connection server daemon (10.0.0.1:39562). Mar 14 00:39:51.822445 sshd[6642]: Accepted publickey for core from 10.0.0.1 port 39562 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:51.824288 sshd[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:51.861550 systemd-logind[1437]: New session 26 of user core. Mar 14 00:39:51.868297 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 14 00:39:52.099894 sshd[6642]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:52.105747 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:39562.service: Deactivated successfully. Mar 14 00:39:52.111783 systemd[1]: session-26.scope: Deactivated successfully. Mar 14 00:39:52.116090 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit. Mar 14 00:39:52.120218 systemd-logind[1437]: Removed session 26. Mar 14 00:39:55.805355 kubelet[2583]: E0314 00:39:55.805268 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:39:57.123223 systemd[1]: Started sshd@26-10.0.0.132:22-10.0.0.1:39572.service - OpenSSH per-connection server daemon (10.0.0.1:39572). 
Mar 14 00:39:57.273210 sshd[6710]: Accepted publickey for core from 10.0.0.1 port 39572 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:39:57.277955 sshd[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:39:57.285887 systemd-logind[1437]: New session 27 of user core. Mar 14 00:39:57.290193 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 14 00:39:57.731783 sshd[6710]: pam_unix(sshd:session): session closed for user core Mar 14 00:39:57.741505 systemd[1]: sshd@26-10.0.0.132:22-10.0.0.1:39572.service: Deactivated successfully. Mar 14 00:39:57.750842 systemd[1]: session-27.scope: Deactivated successfully. Mar 14 00:39:57.754521 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit. Mar 14 00:39:57.756685 systemd-logind[1437]: Removed session 27. Mar 14 00:40:02.755246 systemd[1]: Started sshd@27-10.0.0.132:22-10.0.0.1:39326.service - OpenSSH per-connection server daemon (10.0.0.1:39326). Mar 14 00:40:02.845440 sshd[6724]: Accepted publickey for core from 10.0.0.1 port 39326 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:40:02.862376 sshd[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:02.875930 systemd-logind[1437]: New session 28 of user core. Mar 14 00:40:02.891931 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 14 00:40:03.190477 sshd[6724]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:03.201166 systemd-logind[1437]: Session 28 logged out. Waiting for processes to exit. Mar 14 00:40:03.202386 systemd[1]: sshd@27-10.0.0.132:22-10.0.0.1:39326.service: Deactivated successfully. Mar 14 00:40:03.220657 systemd[1]: session-28.scope: Deactivated successfully. Mar 14 00:40:03.234436 systemd-logind[1437]: Removed session 28. 
Mar 14 00:40:03.800497 kubelet[2583]: E0314 00:40:03.798875 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:40:08.282883 systemd[1]: Started sshd@28-10.0.0.132:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336). Mar 14 00:40:08.412102 sshd[6739]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:40:08.421761 sshd[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:08.440681 systemd-logind[1437]: New session 29 of user core. Mar 14 00:40:08.489203 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 14 00:40:09.118922 sshd[6739]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:09.146399 systemd[1]: sshd@28-10.0.0.132:22-10.0.0.1:39336.service: Deactivated successfully. Mar 14 00:40:09.183164 systemd[1]: session-29.scope: Deactivated successfully. Mar 14 00:40:09.186848 systemd-logind[1437]: Session 29 logged out. Waiting for processes to exit. Mar 14 00:40:09.194435 systemd-logind[1437]: Removed session 29. Mar 14 00:40:09.799620 kubelet[2583]: E0314 00:40:09.799440 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:40:09.800191 kubelet[2583]: E0314 00:40:09.799713 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"