Apr 21 10:19:18.890357 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:19:18.890375 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:19:18.890384 kernel: BIOS-provided physical RAM map:
Apr 21 10:19:18.890389 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 21 10:19:18.890393 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 21 10:19:18.890397 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:19:18.890402 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 21 10:19:18.890407 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 21 10:19:18.890411 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:19:18.890417 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:19:18.890421 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:19:18.890425 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:19:18.890430 kernel: NX (Execute Disable) protection: active
Apr 21 10:19:18.890434 kernel: APIC: Static calls initialized
Apr 21 10:19:18.890440 kernel: SMBIOS 2.8 present.
Apr 21 10:19:18.890446 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 21 10:19:18.890450 kernel: Hypervisor detected: KVM
Apr 21 10:19:18.890455 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:19:18.890516 kernel: kvm-clock: using sched offset of 4483774775 cycles
Apr 21 10:19:18.890523 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:19:18.890528 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:19:18.890533 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:19:18.890538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:19:18.890542 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 10:19:18.890550 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:19:18.890555 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:19:18.890560 kernel: Using GB pages for direct mapping
Apr 21 10:19:18.890564 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:19:18.890569 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 21 10:19:18.890574 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890579 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890583 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890588 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 21 10:19:18.890594 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890599 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890603 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890608 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:19:18.890613 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 21 10:19:18.890617 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 21 10:19:18.890622 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 21 10:19:18.890629 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 21 10:19:18.890636 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 21 10:19:18.890641 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 21 10:19:18.890646 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 21 10:19:18.890651 kernel: No NUMA configuration found
Apr 21 10:19:18.890656 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 21 10:19:18.890661 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 21 10:19:18.890667 kernel: Zone ranges:
Apr 21 10:19:18.890672 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:19:18.890677 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 21 10:19:18.890682 kernel: Normal empty
Apr 21 10:19:18.890687 kernel: Movable zone start for each node
Apr 21 10:19:18.890692 kernel: Early memory node ranges
Apr 21 10:19:18.890697 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:19:18.890702 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 21 10:19:18.890707 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 21 10:19:18.890712 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:19:18.890718 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:19:18.890723 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 21 10:19:18.890728 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:19:18.890733 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:19:18.890738 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:19:18.890743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:19:18.890748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:19:18.890753 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:19:18.890758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:19:18.890765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:19:18.890769 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:19:18.890774 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:19:18.890779 kernel: TSC deadline timer available
Apr 21 10:19:18.890784 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:19:18.890789 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:19:18.890794 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:19:18.890799 kernel: kvm-guest: setup PV sched yield
Apr 21 10:19:18.890804 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:19:18.890810 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:19:18.890816 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:19:18.890821 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:19:18.890826 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:19:18.890831 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:19:18.890836 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:19:18.890841 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:19:18.890846 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:19:18.890851 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:19:18.890858 kernel: random: crng init done
Apr 21 10:19:18.890863 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:19:18.890868 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:19:18.890873 kernel: Fallback order for Node 0: 0
Apr 21 10:19:18.890878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 21 10:19:18.890883 kernel: Policy zone: DMA32
Apr 21 10:19:18.890888 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:19:18.890893 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137900K reserved, 0K cma-reserved)
Apr 21 10:19:18.890899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:19:18.890904 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:19:18.890909 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:19:18.890914 kernel: Dynamic Preempt: voluntary
Apr 21 10:19:18.890919 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:19:18.890925 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:19:18.890930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:19:18.890935 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:19:18.890940 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:19:18.890945 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:19:18.890952 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:19:18.890957 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:19:18.890961 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:19:18.890966 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:19:18.890971 kernel: Console: colour VGA+ 80x25
Apr 21 10:19:18.890976 kernel: printk: console [ttyS0] enabled
Apr 21 10:19:18.890981 kernel: ACPI: Core revision 20230628
Apr 21 10:19:18.890986 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:19:18.891011 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:19:18.891018 kernel: x2apic enabled
Apr 21 10:19:18.891023 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:19:18.891028 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:19:18.891033 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:19:18.891038 kernel: kvm-guest: setup PV IPIs
Apr 21 10:19:18.891043 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:19:18.891048 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:19:18.891060 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:19:18.891066 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:19:18.891071 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:19:18.891077 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:19:18.891085 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:19:18.891090 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:19:18.891096 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:19:18.891101 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:19:18.891107 kernel: RETBleed: Vulnerable
Apr 21 10:19:18.891114 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:19:18.891120 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:19:18.891125 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:19:18.891131 kernel: active return thunk: its_return_thunk
Apr 21 10:19:18.891137 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:19:18.891142 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:19:18.891148 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:19:18.891153 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:19:18.891159 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:19:18.891166 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:19:18.891172 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:19:18.891177 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:19:18.891182 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:19:18.891188 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:19:18.891193 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:19:18.891199 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:19:18.891204 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:19:18.891243 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:19:18.891252 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:19:18.891258 kernel: landlock: Up and running.
Apr 21 10:19:18.891263 kernel: SELinux: Initializing.
Apr 21 10:19:18.891269 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:19:18.891274 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:19:18.891280 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:19:18.891286 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:19:18.891291 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:19:18.891297 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:19:18.891304 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:19:18.891310 kernel: signal: max sigframe size: 3632
Apr 21 10:19:18.891315 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:19:18.891321 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:19:18.891326 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:19:18.891332 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:19:18.891337 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:19:18.891343 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:19:18.891349 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:19:18.891355 kernel: smpboot: Max logical packages: 1
Apr 21 10:19:18.891361 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:19:18.891367 kernel: devtmpfs: initialized
Apr 21 10:19:18.891372 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:19:18.891377 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:19:18.891383 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:19:18.891389 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:19:18.891394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:19:18.891400 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:19:18.891407 kernel: audit: type=2000 audit(1776766758.421:1): state=initialized audit_enabled=0 res=1
Apr 21 10:19:18.891413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:19:18.891418 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:19:18.891424 kernel: cpuidle: using governor menu
Apr 21 10:19:18.891429 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:19:18.891435 kernel: dca service started, version 1.12.1
Apr 21 10:19:18.891441 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:19:18.891446 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:19:18.891452 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:19:18.891459 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:19:18.891494 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:19:18.891500 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:19:18.891505 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:19:18.891511 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:19:18.891516 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:19:18.891522 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:19:18.891527 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:19:18.891533 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:19:18.891541 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:19:18.891546 kernel: ACPI: Interpreter enabled
Apr 21 10:19:18.891551 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:19:18.891557 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:19:18.891563 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:19:18.891568 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:19:18.891574 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:19:18.891579 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:19:18.891689 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:19:18.891754 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:19:18.891808 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:19:18.891815 kernel: PCI host bridge to bus 0000:00
Apr 21 10:19:18.891872 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:19:18.891922 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:19:18.891972 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:19:18.892050 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:19:18.892101 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:19:18.892150 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 21 10:19:18.892199 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:19:18.892266 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:19:18.892328 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:19:18.892391 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:19:18.892446 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:19:18.892540 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:19:18.892596 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:19:18.892661 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:19:18.892736 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 21 10:19:18.892794 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:19:18.892852 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:19:18.892912 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:19:18.892968 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 21 10:19:18.893048 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:19:18.893134 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:19:18.893262 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:19:18.893319 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 21 10:19:18.893377 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:19:18.893431 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 21 10:19:18.893540 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:19:18.893601 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:19:18.893655 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:19:18.893714 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:19:18.893770 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 21 10:19:18.893827 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 21 10:19:18.893885 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:19:18.893940 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:19:18.893947 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:19:18.893953 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:19:18.893958 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:19:18.893964 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:19:18.893972 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:19:18.893978 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:19:18.893983 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:19:18.894010 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:19:18.894016 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:19:18.894021 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:19:18.894027 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:19:18.894033 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:19:18.894038 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:19:18.894045 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:19:18.894051 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:19:18.894057 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:19:18.894062 kernel: iommu: Default domain type: Translated
Apr 21 10:19:18.894068 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:19:18.894074 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:19:18.894079 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:19:18.894085 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 21 10:19:18.894091 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 21 10:19:18.894149 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:19:18.894203 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:19:18.894258 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:19:18.894265 kernel: vgaarb: loaded
Apr 21 10:19:18.894271 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:19:18.894276 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:19:18.894282 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:19:18.894288 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:19:18.894293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:19:18.894301 kernel: pnp: PnP ACPI init
Apr 21 10:19:18.894359 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:19:18.894367 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:19:18.894373 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:19:18.894379 kernel: NET: Registered PF_INET protocol family
Apr 21 10:19:18.894384 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:19:18.894390 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:19:18.894396 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:19:18.894403 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:19:18.894409 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:19:18.894414 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:19:18.894420 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:19:18.894426 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:19:18.894431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:19:18.894437 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:19:18.894538 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:19:18.894589 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:19:18.894641 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:19:18.894690 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:19:18.894739 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:19:18.894788 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 21 10:19:18.894796 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:19:18.894801 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:19:18.894807 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:19:18.894813 kernel: Initialise system trusted keyrings
Apr 21 10:19:18.894821 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:19:18.894827 kernel: Key type asymmetric registered
Apr 21 10:19:18.894833 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:19:18.894838 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:19:18.894844 kernel: io scheduler mq-deadline registered
Apr 21 10:19:18.894849 kernel: io scheduler kyber registered
Apr 21 10:19:18.894855 kernel: io scheduler bfq registered
Apr 21 10:19:18.894860 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:19:18.894866 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:19:18.894874 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:19:18.894879 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:19:18.894885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:19:18.894890 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:19:18.894896 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:19:18.894902 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:19:18.894907 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:19:18.894967 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:19:18.894977 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:19:18.895051 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:19:18.895104 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:19:18 UTC (1776766758)
Apr 21 10:19:18.895156 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:19:18.895163 kernel: intel_pstate: CPU model not supported
Apr 21 10:19:18.895169 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:19:18.895175 kernel: Segment Routing with IPv6
Apr 21 10:19:18.895180 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:19:18.895186 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:19:18.895193 kernel: Key type dns_resolver registered
Apr 21 10:19:18.895199 kernel: IPI shorthand broadcast: enabled
Apr 21 10:19:18.895204 kernel: sched_clock: Marking stable (845013637, 200358849)->(1196299774, -150927288)
Apr 21 10:19:18.895210 kernel: registered taskstats version 1
Apr 21 10:19:18.895215 kernel: Loading compiled-in X.509 certificates
Apr 21 10:19:18.895221 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:19:18.895226 kernel: Key type .fscrypt registered
Apr 21 10:19:18.895232 kernel: Key type fscrypt-provisioning registered
Apr 21 10:19:18.895237 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:19:18.895244 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:19:18.895249 kernel: ima: No architecture policies found
Apr 21 10:19:18.895255 kernel: clk: Disabling unused clocks
Apr 21 10:19:18.895260 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:19:18.895266 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:19:18.895271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:19:18.895277 kernel: Run /init as init process
Apr 21 10:19:18.895282 kernel: with arguments:
Apr 21 10:19:18.895288 kernel: /init
Apr 21 10:19:18.895295 kernel: with environment:
Apr 21 10:19:18.895300 kernel: HOME=/
Apr 21 10:19:18.895306 kernel: TERM=linux
Apr 21 10:19:18.895313 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:19:18.895321 systemd[1]: Detected virtualization kvm.
Apr 21 10:19:18.895327 systemd[1]: Detected architecture x86-64.
Apr 21 10:19:18.895333 systemd[1]: Running in initrd.
Apr 21 10:19:18.895339 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:19:18.895346 systemd[1]: Hostname set to .
Apr 21 10:19:18.895352 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:19:18.895358 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:19:18.895364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:19:18.895370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:19:18.895376 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:19:18.895383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:19:18.895389 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:19:18.895397 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:19:18.895414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:19:18.895420 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:19:18.895426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:19:18.895434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:19:18.895440 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:19:18.895728 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:19:18.895737 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:19:18.895743 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:19:18.895749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:19:18.895755 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:19:18.895761 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:19:18.895768 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:19:18.895776 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:19:18.895782 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:19:18.895788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:19:18.895794 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:19:18.895800 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:19:18.895807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:19:18.895814 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:19:18.895820 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:19:18.895826 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:19:18.895833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:19:18.895839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:19:18.895845 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:19:18.895867 systemd-journald[194]: Collecting audit messages is disabled. Apr 21 10:19:18.895884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:19:18.895891 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:19:18.895901 systemd-journald[194]: Journal started Apr 21 10:19:18.895916 systemd-journald[194]: Runtime Journal (/run/log/journal/caedddb13cbf4ab3b8508aa942e3fc07) is 6.0M, max 48.4M, 42.3M free. Apr 21 10:19:18.900499 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:19:18.902555 systemd-modules-load[195]: Inserted module 'overlay' Apr 21 10:19:18.910660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:19:19.012634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 21 10:19:19.012659 kernel: Bridge firewalling registered Apr 21 10:19:18.927328 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 21 10:19:19.032713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:19:19.033156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:19:19.037393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:19:19.041262 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:19:19.046442 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:19:19.050075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:19:19.057745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:19:19.061961 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:19:19.067080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:19:19.069709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:19:19.070283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:19:19.072353 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:19:19.077109 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:19:19.099881 systemd-resolved[226]: Positive Trust Anchors:
Apr 21 10:19:19.099893 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:19:19.099917 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:19:19.119942 dracut-cmdline[231]: dracut-dracut-053 Apr 21 10:19:19.119942 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:19:19.101867 systemd-resolved[226]: Defaulting to hostname 'linux'. Apr 21 10:19:19.102657 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:19:19.106434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:19:19.182533 kernel: SCSI subsystem initialized Apr 21 10:19:19.191527 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:19:19.201529 kernel: iscsi: registered transport (tcp) Apr 21 10:19:19.221120 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:19:19.221199 kernel: QLogic iSCSI HBA Driver Apr 21 10:19:19.253407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:19:19.263762 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:19:19.286281 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 10:19:19.286347 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:19:19.288150 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:19:19.326545 kernel: raid6: avx512x4 gen() 44767 MB/s Apr 21 10:19:19.343653 kernel: raid6: avx512x2 gen() 42999 MB/s Apr 21 10:19:19.360668 kernel: raid6: avx512x1 gen() 43032 MB/s Apr 21 10:19:19.377534 kernel: raid6: avx2x4 gen() 36291 MB/s Apr 21 10:19:19.394564 kernel: raid6: avx2x2 gen() 35714 MB/s Apr 21 10:19:19.412797 kernel: raid6: avx2x1 gen() 27647 MB/s Apr 21 10:19:19.412869 kernel: raid6: using algorithm avx512x4 gen() 44767 MB/s Apr 21 10:19:19.431669 kernel: raid6: .... xor() 9981 MB/s, rmw enabled Apr 21 10:19:19.431686 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:19:19.450524 kernel: xor: automatically using best checksumming function avx Apr 21 10:19:19.577556 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:19:19.587154 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:19:19.600695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:19:19.610350 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 21 10:19:19.612963 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:19:19.613819 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:19:19.631220 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 21 10:19:19.656320 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:19:19.668651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:19:19.703332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 21 10:19:19.712657 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 10:19:19.723328 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:19:19.723790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:19:19.727294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:19:19.736917 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:19:19.746502 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:19:19.747664 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:19:19.753179 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:19:19.758063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:19:19.763095 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:19:19.758163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:19:19.771728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:19:19.771746 kernel: GPT:9289727 != 19775487 Apr 21 10:19:19.771753 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:19:19.771760 kernel: GPT:9289727 != 19775487 Apr 21 10:19:19.771767 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:19:19.771774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:19:19.770302 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:19:19.778205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:19:19.778451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:19:19.782093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:19:19.792641 kernel: libata version 3.00 loaded. 
Apr 21 10:19:19.791750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:19:19.794368 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:19:19.809520 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:19:19.809559 kernel: AES CTR mode by8 optimization enabled Apr 21 10:19:19.812167 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:19:19.812312 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:19:19.815566 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:19:19.815694 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:19:19.819530 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (470) Apr 21 10:19:19.822521 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460) Apr 21 10:19:19.828523 kernel: scsi host0: ahci Apr 21 10:19:19.828655 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Apr 21 10:19:19.955586 kernel: scsi host1: ahci Apr 21 10:19:19.955756 kernel: scsi host2: ahci Apr 21 10:19:19.955829 kernel: scsi host3: ahci Apr 21 10:19:19.955904 kernel: scsi host4: ahci Apr 21 10:19:19.955969 kernel: scsi host5: ahci Apr 21 10:19:19.956066 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 21 10:19:19.956074 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 21 10:19:19.956082 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 21 10:19:19.956089 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 21 10:19:19.956096 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 21 10:19:19.956103 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 21 10:19:19.956353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:19:19.961968 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 10:19:19.964346 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:19:19.966843 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:19:19.973673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:19:19.992794 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:19:19.997901 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:19:20.005512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:19:20.005550 disk-uuid[556]: Primary Header is updated. Apr 21 10:19:20.005550 disk-uuid[556]: Secondary Entries is updated. Apr 21 10:19:20.005550 disk-uuid[556]: Secondary Header is updated. 
Apr 21 10:19:20.017920 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:19:20.143792 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:19:20.143865 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:19:20.144504 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:19:20.145523 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:19:20.147521 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:19:20.150520 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:19:20.150533 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:19:20.152364 kernel: ata3.00: applying bridge limits Apr 21 10:19:20.153575 kernel: ata3.00: configured for UDMA/100 Apr 21 10:19:20.156500 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:19:20.201307 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:19:20.201614 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:19:20.216526 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:19:21.015538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:19:21.016138 disk-uuid[559]: The operation has completed successfully. Apr 21 10:19:21.038262 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:19:21.038367 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:19:21.055716 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:19:21.061328 sh[597]: Success Apr 21 10:19:21.075515 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:19:21.105998 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:19:21.126111 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:19:21.128705 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:19:21.149315 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:19:21.149363 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:19:21.149372 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:19:21.151256 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:19:21.152665 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:19:21.159172 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:19:21.163390 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:19:21.172730 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:19:21.175128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:19:21.187038 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:19:21.187059 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:19:21.187067 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:19:21.192503 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:19:21.199886 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:19:21.203215 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:19:21.211454 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:19:21.222646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:19:21.270591 ignition[705]: Ignition 2.19.0 Apr 21 10:19:21.270613 ignition[705]: Stage: fetch-offline Apr 21 10:19:21.270643 ignition[705]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:19:21.270650 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:19:21.270731 ignition[705]: parsed url from cmdline: "" Apr 21 10:19:21.270734 ignition[705]: no config URL provided Apr 21 10:19:21.270738 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:19:21.270743 ignition[705]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:19:21.270760 ignition[705]: op(1): [started] loading QEMU firmware config module Apr 21 10:19:21.285365 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:19:21.270764 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:19:21.278117 ignition[705]: op(1): [finished] loading QEMU firmware config module Apr 21 10:19:21.296704 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:19:21.313050 systemd-networkd[784]: lo: Link UP Apr 21 10:19:21.313072 systemd-networkd[784]: lo: Gained carrier Apr 21 10:19:21.313950 systemd-networkd[784]: Enumeration completed Apr 21 10:19:21.314193 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:19:21.314593 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:19:21.314596 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:19:21.315581 systemd-networkd[784]: eth0: Link UP Apr 21 10:19:21.315583 systemd-networkd[784]: eth0: Gained carrier Apr 21 10:19:21.315589 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:19:21.317065 systemd[1]: Reached target network.target - Network. Apr 21 10:19:21.345577 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:19:21.450192 ignition[705]: parsing config with SHA512: 0e292508994a5d5160140433ef4253111a699d1b928b2c6ed6dc60a9da13b14f92da593c9d005f6d5ff54461f31a7bade0addf5f97bad1f08147af071405faba Apr 21 10:19:21.454193 unknown[705]: fetched base config from "system" Apr 21 10:19:21.454205 unknown[705]: fetched user config from "qemu" Apr 21 10:19:21.455181 ignition[705]: fetch-offline: fetch-offline passed Apr 21 10:19:21.456806 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:19:21.455349 ignition[705]: Ignition finished successfully Apr 21 10:19:21.457814 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:19:21.461034 systemd-resolved[226]: Detected conflict on linux IN A 10.0.0.55 Apr 21 10:19:21.461041 systemd-resolved[226]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Apr 21 10:19:21.472852 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:19:21.485260 ignition[789]: Ignition 2.19.0 Apr 21 10:19:21.485280 ignition[789]: Stage: kargs Apr 21 10:19:21.485409 ignition[789]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:19:21.485417 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:19:21.486144 ignition[789]: kargs: kargs passed Apr 21 10:19:21.486175 ignition[789]: Ignition finished successfully Apr 21 10:19:21.496124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:19:21.511799 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 21 10:19:21.526299 ignition[797]: Ignition 2.19.0 Apr 21 10:19:21.526308 ignition[797]: Stage: disks Apr 21 10:19:21.526446 ignition[797]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:19:21.526453 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:19:21.527141 ignition[797]: disks: disks passed Apr 21 10:19:21.527171 ignition[797]: Ignition finished successfully Apr 21 10:19:21.534319 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:19:21.535132 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:19:21.538108 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:19:21.544048 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:19:21.547831 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:19:21.548192 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:19:21.564669 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:19:21.581832 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:19:21.586521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:19:21.604617 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:19:21.697518 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:19:21.697877 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:19:21.699922 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:19:21.712610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:19:21.715117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 21 10:19:21.720508 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Apr 21 10:19:21.721933 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:19:21.730622 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:19:21.730644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:19:21.730653 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:19:21.721988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:19:21.722034 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:19:21.742527 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:19:21.733895 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:19:21.738067 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:19:21.748311 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:19:21.781587 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:19:21.785728 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:19:21.788879 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:19:21.792529 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:19:21.881598 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:19:21.897005 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:19:21.900410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 21 10:19:21.906723 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:19:21.925842 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:19:21.933447 ignition[928]: INFO : Ignition 2.19.0 Apr 21 10:19:21.933447 ignition[928]: INFO : Stage: mount Apr 21 10:19:21.936180 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:19:21.936180 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:19:21.936180 ignition[928]: INFO : mount: mount passed Apr 21 10:19:21.936180 ignition[928]: INFO : Ignition finished successfully Apr 21 10:19:21.938664 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:19:21.952783 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:19:22.147756 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:19:22.161778 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:19:22.172526 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Apr 21 10:19:22.172565 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:19:22.175916 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:19:22.175930 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:19:22.181502 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:19:22.182832 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:19:22.208936 ignition[961]: INFO : Ignition 2.19.0 Apr 21 10:19:22.208936 ignition[961]: INFO : Stage: files Apr 21 10:19:22.211878 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:19:22.211878 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:19:22.211878 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:19:22.211878 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:19:22.211878 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:19:22.223565 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:19:22.223565 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:19:22.223565 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:19:22.223565 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 21 10:19:22.223565 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 21 10:19:22.223565 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:19:22.223565 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 10:19:22.216986 unknown[961]: wrote ssh authorized keys file for user: core Apr 21 10:19:22.275879 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 21 10:19:22.380164 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:19:22.380164 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:19:22.386987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 21 10:19:22.386987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:19:22.386987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:19:22.386987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:19:22.386987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:19:22.386987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:19:22.405766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 21 10:19:22.706296 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 21 10:19:22.862116 systemd-networkd[784]: eth0: Gained IPv6LL Apr 21 10:19:22.968988 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:19:22.968988 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 21 10:19:22.975323 ignition[961]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Apr 21 10:19:23.014670 ignition[961]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 10:19:23.014670 ignition[961]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 10:19:23.014670 ignition[961]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Apr 21 10:19:23.014670 ignition[961]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:19:23.014670 ignition[961]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 10:19:23.014670 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:19:23.014670 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:19:23.014670 ignition[961]: INFO : files: files passed Apr 21 10:19:23.014670 ignition[961]: INFO : Ignition finished successfully Apr 21 10:19:22.995640 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:19:23.015694 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:19:23.020162 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:19:23.024506 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:19:23.058035 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:19:23.024595 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:19:23.062685 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:19:23.062685 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:19:23.035331 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:19:23.071095 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:19:23.039500 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:19:23.045117 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:19:23.068040 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:19:23.068131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:19:23.070634 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:19:23.070771 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:19:23.076781 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:19:23.081375 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:19:23.106185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:19:23.109635 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:19:23.122886 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:19:23.123103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:19:23.127233 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:19:23.134134 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:19:23.134265 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:19:23.140013 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:19:23.140212 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:19:23.143878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:19:23.146612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:19:23.152353 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:19:23.156247 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:19:23.158398 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:19:23.161548 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:19:23.165859 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:19:23.169355 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:19:23.172892 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:19:23.173005 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:19:23.180926 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:19:23.181091 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:19:23.184703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:19:23.185003 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:19:23.188700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:19:23.188823 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:19:23.198350 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:19:23.198519 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:19:23.200276 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:19:23.203895 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:19:23.212640 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:19:23.217400 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:19:23.217880 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:19:23.222306 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:19:23.222394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:19:23.223899 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:19:23.223979 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:19:23.226920 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:19:23.227061 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:19:23.230157 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:19:23.230242 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:19:23.246724 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:19:23.251093 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:19:23.252704 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:19:23.260675 ignition[1015]: INFO : Ignition 2.19.0
Apr 21 10:19:23.260675 ignition[1015]: INFO : Stage: umount
Apr 21 10:19:23.260675 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:19:23.260675 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:19:23.260675 ignition[1015]: INFO : umount: umount passed
Apr 21 10:19:23.260675 ignition[1015]: INFO : Ignition finished successfully
Apr 21 10:19:23.252802 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:19:23.255133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:19:23.255365 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:19:23.262824 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:19:23.262957 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:19:23.266275 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:19:23.266350 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:19:23.269398 systemd[1]: Stopped target network.target - Network.
Apr 21 10:19:23.272229 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:19:23.272271 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:19:23.275919 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:19:23.275962 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:19:23.280945 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:19:23.280982 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:19:23.283715 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:19:23.283749 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:19:23.287732 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:19:23.292909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:19:23.295755 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:19:23.306511 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:19:23.306692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:19:23.309956 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:19:23.310012 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:19:23.315643 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 21 10:19:23.317248 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:19:23.317395 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:19:23.320925 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:19:23.321051 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:19:23.324977 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:19:23.325015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:19:23.326361 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:19:23.326407 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:19:23.342676 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:19:23.344123 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:19:23.344178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:19:23.349581 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:19:23.349634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:19:23.353417 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:19:23.353458 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:19:23.357426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:19:23.369359 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:19:23.369501 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:19:23.374218 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:19:23.374358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:19:23.379237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:19:23.379274 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:19:23.381257 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:19:23.381285 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:19:23.384814 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:19:23.384851 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:19:23.392954 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:19:23.393056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:19:23.397573 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:19:23.397624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:19:23.415880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:19:23.416359 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:19:23.416411 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:19:23.419708 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:19:23.419752 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:19:23.423954 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:19:23.423995 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:19:23.427973 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:19:23.428047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:19:23.432562 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:19:23.432655 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:19:23.440668 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:19:23.459675 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:19:23.466438 systemd[1]: Switching root.
Apr 21 10:19:23.497075 systemd-journald[194]: Journal stopped
Apr 21 10:19:24.324897 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:19:24.324943 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:19:24.324958 kernel: SELinux: policy capability open_perms=1
Apr 21 10:19:24.324966 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:19:24.324974 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:19:24.324981 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:19:24.324989 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:19:24.324997 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:19:24.325006 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:19:24.325014 kernel: audit: type=1403 audit(1776766763.665:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:19:24.325051 systemd[1]: Successfully loaded SELinux policy in 43.441ms.
Apr 21 10:19:24.325071 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.048ms.
Apr 21 10:19:24.325082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:19:24.325091 systemd[1]: Detected virtualization kvm.
Apr 21 10:19:24.325099 systemd[1]: Detected architecture x86-64.
Apr 21 10:19:24.325107 systemd[1]: Detected first boot.
Apr 21 10:19:24.325115 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:19:24.325125 zram_generator::config[1077]: No configuration found.
Apr 21 10:19:24.325134 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:19:24.325142 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:19:24.325150 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 10:19:24.325159 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:19:24.325167 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:19:24.325175 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:19:24.325182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:19:24.325192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:19:24.325200 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:19:24.325207 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:19:24.325217 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:19:24.325225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:19:24.325232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:19:24.325240 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:19:24.325248 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:19:24.325258 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:19:24.325267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:19:24.325275 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:19:24.325283 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:19:24.325290 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:19:24.325298 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:19:24.325305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:19:24.325313 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:19:24.325323 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:19:24.325330 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:19:24.325338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:19:24.325346 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:19:24.325354 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:19:24.325361 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:19:24.325369 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:19:24.325377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:19:24.325384 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:19:24.325392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:19:24.325401 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:19:24.325409 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:19:24.325417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:24.325425 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:19:24.325433 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:19:24.325440 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:19:24.325449 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:19:24.325456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:19:24.325501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:19:24.325509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:19:24.325517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:19:24.325525 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:19:24.325533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:19:24.325540 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:19:24.325548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:19:24.325556 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:19:24.325564 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 10:19:24.325574 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 10:19:24.325581 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:19:24.325588 kernel: fuse: init (API version 7.39)
Apr 21 10:19:24.325595 kernel: loop: module loaded
Apr 21 10:19:24.325613 systemd-journald[1176]: Collecting audit messages is disabled.
Apr 21 10:19:24.325632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:19:24.325640 systemd-journald[1176]: Journal started
Apr 21 10:19:24.325660 systemd-journald[1176]: Runtime Journal (/run/log/journal/caedddb13cbf4ab3b8508aa942e3fc07) is 6.0M, max 48.4M, 42.3M free.
Apr 21 10:19:24.358808 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:19:24.361515 kernel: ACPI: bus type drm_connector registered
Apr 21 10:19:24.387578 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:19:24.405651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:19:24.425602 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:24.442759 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:19:24.448421 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:19:24.451310 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:19:24.454283 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:19:24.456906 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:19:24.459693 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:19:24.462349 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:19:24.465837 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:19:24.469380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:19:24.473330 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:19:24.474367 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:19:24.478827 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:19:24.479439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:19:24.483412 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:19:24.484201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:19:24.488143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:19:24.488865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:19:24.493230 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:19:24.493949 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:19:24.499377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:19:24.500791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:19:24.505274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:19:24.508859 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:19:24.512929 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:19:24.517323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:19:24.549371 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:19:24.570341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:19:24.577907 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:19:24.580697 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:19:24.585995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:19:24.593278 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:19:24.595912 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:19:24.600324 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:19:24.602352 systemd-journald[1176]: Time spent on flushing to /var/log/journal/caedddb13cbf4ab3b8508aa942e3fc07 is 9.818ms for 940 entries.
Apr 21 10:19:24.602352 systemd-journald[1176]: System Journal (/var/log/journal/caedddb13cbf4ab3b8508aa942e3fc07) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:19:24.627660 systemd-journald[1176]: Received client request to flush runtime journal.
Apr 21 10:19:24.604826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:19:24.609124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:19:24.617157 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:19:24.636645 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:19:24.639961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:19:24.642553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:19:24.645099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:19:24.647832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:19:24.650392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:19:24.656575 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:19:24.660400 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:19:24.706356 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Apr 21 10:19:24.706380 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Apr 21 10:19:24.710818 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:19:24.719200 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:19:24.743892 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:19:24.755643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:19:24.771403 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Apr 21 10:19:24.771436 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Apr 21 10:19:24.774976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:19:24.964250 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:19:24.977371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:19:24.995120 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Apr 21 10:19:25.012268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:19:25.034990 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:19:25.043575 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1262)
Apr 21 10:19:25.053324 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 10:19:25.082631 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:19:25.090151 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:19:25.098571 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:19:25.107547 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:19:25.113963 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:19:25.114195 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:19:25.114288 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:19:25.117857 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:19:25.153639 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:19:25.166531 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:19:25.172922 systemd-networkd[1260]: lo: Link UP
Apr 21 10:19:25.172928 systemd-networkd[1260]: lo: Gained carrier
Apr 21 10:19:25.174202 systemd-networkd[1260]: Enumeration completed
Apr 21 10:19:25.174349 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:19:25.177588 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:19:25.177634 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:19:25.178401 systemd-networkd[1260]: eth0: Link UP
Apr 21 10:19:25.178445 systemd-networkd[1260]: eth0: Gained carrier
Apr 21 10:19:25.178513 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:19:25.235543 systemd-networkd[1260]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 10:19:25.235731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:19:25.243750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:19:25.304597 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:19:25.314700 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:19:25.320715 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:19:25.345578 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:19:25.436261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:19:25.439865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:19:25.451637 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:19:25.456114 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:19:25.484150 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:19:25.486820 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:19:25.489231 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:19:25.489264 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:19:25.491248 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:19:25.493893 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:19:25.514787 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:19:25.518524 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:19:25.520447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:19:25.521300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:19:25.524343 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:19:25.528948 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:19:25.535090 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:19:25.538270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:19:25.542521 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:19:25.550561 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:19:25.551966 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:19:25.561518 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:19:25.585504 kernel: loop1: detected capacity change from 0 to 228704
Apr 21 10:19:25.618376 kernel: loop2: detected capacity change from 0 to 142488
Apr 21 10:19:25.656560 kernel: loop3: detected capacity change from 0 to 140768
Apr 21 10:19:25.669528 kernel: loop4: detected capacity change from 0 to 228704
Apr 21 10:19:25.679513 kernel: loop5: detected capacity change from 0 to 142488
Apr 21 10:19:25.688558 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 21 10:19:25.688937 (sd-merge)[1311]: Merged extensions into '/usr'.
Apr 21 10:19:25.691780 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:19:25.691805 systemd[1]: Reloading...
Apr 21 10:19:25.725527 zram_generator::config[1339]: No configuration found.
Apr 21 10:19:25.744441 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:19:25.812071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:19:25.850335 systemd[1]: Reloading finished in 158 ms.
Apr 21 10:19:25.866871 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:19:25.869756 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:19:25.884602 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:19:25.887070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:19:25.890782 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:19:25.890810 systemd[1]: Reloading...
Apr 21 10:19:25.902335 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:19:25.902597 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:19:25.903113 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:19:25.903293 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Apr 21 10:19:25.903350 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Apr 21 10:19:25.905932 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:19:25.905962 systemd-tmpfiles[1384]: Skipping /boot
Apr 21 10:19:25.911257 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:19:25.911265 systemd-tmpfiles[1384]: Skipping /boot
Apr 21 10:19:25.926582 zram_generator::config[1413]: No configuration found.
Apr 21 10:19:26.016031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:19:26.066450 systemd[1]: Reloading finished in 175 ms.
Apr 21 10:19:26.094952 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:19:26.105553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:26.107320 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:19:26.110566 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:19:26.112818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:19:26.113627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:19:26.117368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:19:26.121659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:19:26.123735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:19:26.126385 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:19:26.132802 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:19:26.137654 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:19:26.140197 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:26.141434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:19:26.141672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:19:26.144812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:19:26.144966 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:19:26.148150 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:19:26.148410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:19:26.151123 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:19:26.156100 augenrules[1486]: No rules
Apr 21 10:19:26.157195 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:19:26.159935 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:19:26.169788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:26.169994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:19:26.171111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:19:26.176685 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:19:26.180235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:19:26.182220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:19:26.185096 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:19:26.187150 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:19:26.187284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:26.188626 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:19:26.191323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:19:26.191448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:19:26.194852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:19:26.194987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:19:26.197771 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:19:26.197933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:19:26.200378 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:19:26.207918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:26.208124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:19:26.212724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:19:26.216452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:19:26.219278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:19:26.221794 systemd-resolved[1474]: Positive Trust Anchors:
Apr 21 10:19:26.221822 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:19:26.221847 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:19:26.224090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:19:26.226081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:19:26.226246 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:19:26.226362 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:19:26.226455 systemd-resolved[1474]: Defaulting to hostname 'linux'.
Apr 21 10:19:26.227334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:19:26.227503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:19:26.229919 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:19:26.232268 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:19:26.232388 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:19:26.234692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:19:26.234806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:19:26.237370 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:19:26.237543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:19:26.240988 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:19:26.245193 systemd[1]: Reached target network.target - Network.
Apr 21 10:19:26.246903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:19:26.249090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:19:26.249147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:19:26.258727 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:19:26.297333 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:19:27.277889 systemd-resolved[1474]: Clock change detected. Flushing caches.
Apr 21 10:19:27.278022 systemd-timesyncd[1530]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 21 10:19:27.278052 systemd-timesyncd[1530]: Initial clock synchronization to Tue 2026-04-21 10:19:27.277646 UTC.
Apr 21 10:19:27.279777 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:19:27.281845 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:19:27.284180 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:19:27.286527 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:19:27.288859 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:19:27.288932 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:19:27.290577 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:19:27.292565 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:19:27.294605 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:19:27.297013 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:19:27.299342 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:19:27.302673 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:19:27.305374 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:19:27.313827 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:19:27.316054 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:19:27.317844 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:19:27.319693 systemd[1]: System is tainted: cgroupsv1
Apr 21 10:19:27.319748 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:19:27.319763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:19:27.320717 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:19:27.323547 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:19:27.328036 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:19:27.330791 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:19:27.332774 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:19:27.333614 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:19:27.337155 jq[1536]: false
Apr 21 10:19:27.340011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:19:27.346071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:19:27.347752 extend-filesystems[1538]: Found loop3
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found loop4
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found loop5
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found sr0
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda1
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda2
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda3
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found usr
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda4
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda6
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda7
Apr 21 10:19:27.350253 extend-filesystems[1538]: Found vda9
Apr 21 10:19:27.350253 extend-filesystems[1538]: Checking size of /dev/vda9
Apr 21 10:19:27.384037 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 21 10:19:27.384059 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1248)
Apr 21 10:19:27.384069 extend-filesystems[1538]: Resized partition /dev/vda9
Apr 21 10:19:27.356789 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:19:27.355031 dbus-daemon[1535]: [system] SELinux support is enabled
Apr 21 10:19:27.389291 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:19:27.379127 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:19:27.389304 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:19:27.395960 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 21 10:19:27.399202 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:19:27.402955 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:19:27.406164 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:19:27.412942 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 21 10:19:27.412942 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 21 10:19:27.412942 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 21 10:19:27.421800 extend-filesystems[1538]: Resized filesystem in /dev/vda9
Apr 21 10:19:27.416204 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:19:27.423654 update_engine[1563]: I20260421 10:19:27.417341 1563 main.cc:92] Flatcar Update Engine starting
Apr 21 10:19:27.423654 update_engine[1563]: I20260421 10:19:27.419601 1563 update_check_scheduler.cc:74] Next update check in 7m38s
Apr 21 10:19:27.416382 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:19:27.416592 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:19:27.416746 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:19:27.423702 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:19:27.423827 systemd-logind[1561]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:19:27.423839 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:19:27.424262 jq[1564]: true
Apr 21 10:19:27.423878 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:19:27.425332 systemd-logind[1561]: New seat seat0.
Apr 21 10:19:27.428868 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:19:27.447260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:19:27.447460 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:19:27.457888 jq[1572]: true
Apr 21 10:19:27.459575 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:19:27.464687 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:19:27.466705 tar[1570]: linux-amd64/LICENSE
Apr 21 10:19:27.466705 tar[1570]: linux-amd64/helm
Apr 21 10:19:27.470617 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:19:27.473607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:19:27.473749 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:19:27.477244 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:19:27.477347 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:19:27.480437 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:19:27.485124 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:19:27.489609 systemd-networkd[1260]: eth0: Gained IPv6LL
Apr 21 10:19:27.492637 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:19:27.495523 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:19:27.499215 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 21 10:19:27.504027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:19:27.507340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:19:27.529141 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 21 10:19:27.529336 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 21 10:19:27.532337 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:19:27.547226 bash[1602]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:19:27.548277 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:19:27.551452 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 10:19:27.564367 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:19:27.584089 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:19:27.651569 containerd[1573]: time="2026-04-21T10:19:27.651506242Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:19:27.681069 containerd[1573]: time="2026-04-21T10:19:27.681037219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.682800 containerd[1573]: time="2026-04-21T10:19:27.682774778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683135 containerd[1573]: time="2026-04-21T10:19:27.682857463Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:19:27.683135 containerd[1573]: time="2026-04-21T10:19:27.682872686Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:19:27.683135 containerd[1573]: time="2026-04-21T10:19:27.683039714Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:19:27.683135 containerd[1573]: time="2026-04-21T10:19:27.683052453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683135 containerd[1573]: time="2026-04-21T10:19:27.683088676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683135 containerd[1573]: time="2026-04-21T10:19:27.683097239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683371 containerd[1573]: time="2026-04-21T10:19:27.683359700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683400 containerd[1573]: time="2026-04-21T10:19:27.683393698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683426 containerd[1573]: time="2026-04-21T10:19:27.683420456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683449 containerd[1573]: time="2026-04-21T10:19:27.683444142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683564 containerd[1573]: time="2026-04-21T10:19:27.683555504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683728 containerd[1573]: time="2026-04-21T10:19:27.683719398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683865 containerd[1573]: time="2026-04-21T10:19:27.683855624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:19:27.683965 containerd[1573]: time="2026-04-21T10:19:27.683954839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:19:27.684059 containerd[1573]: time="2026-04-21T10:19:27.684050567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:19:27.684220 containerd[1573]: time="2026-04-21T10:19:27.684105531Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:19:27.689693 containerd[1573]: time="2026-04-21T10:19:27.689643455Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:19:27.689796 containerd[1573]: time="2026-04-21T10:19:27.689783801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:19:27.689959 containerd[1573]: time="2026-04-21T10:19:27.689849354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:19:27.689959 containerd[1573]: time="2026-04-21T10:19:27.689862470Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:19:27.689959 containerd[1573]: time="2026-04-21T10:19:27.689874206Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:19:27.690103 containerd[1573]: time="2026-04-21T10:19:27.690092952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691689275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691783170Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691798411Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691810060Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691822063Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691833540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691843389Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691856198Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691868757Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.691972 containerd[1573]: time="2026-04-21T10:19:27.691880702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.692177 containerd[1573]: time="2026-04-21T10:19:27.691892535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.692214 containerd[1573]: time="2026-04-21T10:19:27.692204300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:19:27.692265 containerd[1573]: time="2026-04-21T10:19:27.692254801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.692297 containerd[1573]: time="2026-04-21T10:19:27.692291310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.692333 containerd[1573]: time="2026-04-21T10:19:27.692327266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.692371 containerd[1573]: time="2026-04-21T10:19:27.692365097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.692408 containerd[1573]: time="2026-04-21T10:19:27.692401414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.692439 containerd[1573]: time="2026-04-21T10:19:27.692431178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.692499 containerd[1573]: time="2026-04-21T10:19:27.692462568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692527046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692541856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692556181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692568177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692580028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692590402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692605626Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692627019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692637050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692647003Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692683696Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692698848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:19:27.693232 containerd[1573]: time="2026-04-21T10:19:27.692707285Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:19:27.693446 containerd[1573]: time="2026-04-21T10:19:27.692718522Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:19:27.693446 containerd[1573]: time="2026-04-21T10:19:27.692728431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693446 containerd[1573]: time="2026-04-21T10:19:27.692746932Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:19:27.693446 containerd[1573]: time="2026-04-21T10:19:27.692762153Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:19:27.693446 containerd[1573]: time="2026-04-21T10:19:27.692771103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:19:27.693891 containerd[1573]: time="2026-04-21T10:19:27.693844489Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:19:27.694188 containerd[1573]: time="2026-04-21T10:19:27.694176185Z" level=info msg="Connect containerd service" Apr 21 10:19:27.694246 containerd[1573]: time="2026-04-21T10:19:27.694239549Z" level=info msg="using legacy CRI server" Apr 21 10:19:27.694272 containerd[1573]: time="2026-04-21T10:19:27.694266545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:19:27.694449 containerd[1573]: time="2026-04-21T10:19:27.694437620Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:19:27.695313 containerd[1573]: time="2026-04-21T10:19:27.695291693Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.695595373Z" level=info msg="Start subscribing containerd event" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.695658937Z" level=info msg="Start recovering state" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.695713738Z" level=info msg="Start event monitor" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.695722518Z" 
level=info msg="Start snapshots syncer" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.695729712Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.695734915Z" level=info msg="Start streaming server" Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.696499743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.696545402Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:19:27.696838 containerd[1573]: time="2026-04-21T10:19:27.696601292Z" level=info msg="containerd successfully booted in 0.048812s" Apr 21 10:19:27.696707 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:19:27.777438 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:19:27.796514 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:19:27.812179 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:19:27.818527 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:19:27.818717 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:19:27.827241 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:19:27.834435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:19:27.842136 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:19:27.845205 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:19:27.847993 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:19:27.906837 tar[1570]: linux-amd64/README.md Apr 21 10:19:27.919334 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:19:28.204068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:19:28.206648 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:19:28.207624 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:19:28.209028 systemd[1]: Startup finished in 5.943s (kernel) + 3.603s (userspace) = 9.547s.
Apr 21 10:19:28.635336 kubelet[1672]: E0421 10:19:28.635253 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:19:28.638094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:19:28.638265 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:19:33.010198 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:19:33.022241 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:42432.service - OpenSSH per-connection server daemon (10.0.0.1:42432).
Apr 21 10:19:33.057099 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 42432 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.059003 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.067108 systemd-logind[1561]: New session 1 of user core.
Apr 21 10:19:33.067826 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:19:33.080271 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:19:33.091180 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:19:33.093033 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:19:33.099326 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:19:33.165713 systemd[1691]: Queued start job for default target default.target.
Apr 21 10:19:33.166094 systemd[1691]: Created slice app.slice - User Application Slice.
Apr 21 10:19:33.166107 systemd[1691]: Reached target paths.target - Paths.
Apr 21 10:19:33.166116 systemd[1691]: Reached target timers.target - Timers.
Apr 21 10:19:33.177095 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:19:33.183127 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:19:33.183175 systemd[1691]: Reached target sockets.target - Sockets.
Apr 21 10:19:33.183185 systemd[1691]: Reached target basic.target - Basic System.
Apr 21 10:19:33.183214 systemd[1691]: Reached target default.target - Main User Target.
Apr 21 10:19:33.183234 systemd[1691]: Startup finished in 78ms.
Apr 21 10:19:33.183649 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:19:33.184978 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:19:33.248321 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440).
Apr 21 10:19:33.276126 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.277364 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.281245 systemd-logind[1561]: New session 2 of user core.
Apr 21 10:19:33.293227 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:19:33.348870 sshd[1703]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:33.356227 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450).
Apr 21 10:19:33.356580 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:42440.service: Deactivated successfully.
Apr 21 10:19:33.358374 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:19:33.358987 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:19:33.360210 systemd-logind[1561]: Removed session 2.
Apr 21 10:19:33.381204 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.382507 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.386253 systemd-logind[1561]: New session 3 of user core.
Apr 21 10:19:33.396232 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:19:33.445942 sshd[1708]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:33.457144 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:42458.service - OpenSSH per-connection server daemon (10.0.0.1:42458).
Apr 21 10:19:33.457547 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:42450.service: Deactivated successfully.
Apr 21 10:19:33.458776 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:19:33.459299 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:19:33.460396 systemd-logind[1561]: Removed session 3.
Apr 21 10:19:33.481427 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 42458 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.482630 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.486436 systemd-logind[1561]: New session 4 of user core.
Apr 21 10:19:33.500280 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:19:33.553536 sshd[1716]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:33.572128 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460).
Apr 21 10:19:33.572474 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:42458.service: Deactivated successfully.
Apr 21 10:19:33.574658 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:19:33.575060 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:19:33.576080 systemd-logind[1561]: Removed session 4.
Apr 21 10:19:33.597086 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.598307 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.601921 systemd-logind[1561]: New session 5 of user core.
Apr 21 10:19:33.612146 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:19:33.668313 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:19:33.668568 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:19:33.689740 sudo[1731]: pam_unix(sudo:session): session closed for user root
Apr 21 10:19:33.691467 sshd[1724]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:33.706200 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:42464.service - OpenSSH per-connection server daemon (10.0.0.1:42464).
Apr 21 10:19:33.706572 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:42460.service: Deactivated successfully.
Apr 21 10:19:33.708376 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:19:33.708800 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:19:33.709828 systemd-logind[1561]: Removed session 5.
Apr 21 10:19:33.731845 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 42464 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.733040 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.736497 systemd-logind[1561]: New session 6 of user core.
Apr 21 10:19:33.752160 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:19:33.805577 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:19:33.805790 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:19:33.809255 sudo[1741]: pam_unix(sudo:session): session closed for user root
Apr 21 10:19:33.813770 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:19:33.814023 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:19:33.827136 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:19:33.828643 auditctl[1744]: No rules
Apr 21 10:19:33.828862 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:19:33.829072 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:19:33.830878 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:19:33.854937 augenrules[1763]: No rules
Apr 21 10:19:33.856120 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:19:33.856843 sudo[1740]: pam_unix(sudo:session): session closed for user root
Apr 21 10:19:33.858315 sshd[1733]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:33.868122 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:42476.service - OpenSSH per-connection server daemon (10.0.0.1:42476).
Apr 21 10:19:33.868452 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:42464.service: Deactivated successfully.
Apr 21 10:19:33.870597 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:19:33.871426 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:19:33.872112 systemd-logind[1561]: Removed session 6.
Apr 21 10:19:33.893080 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 42476 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:19:33.894081 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:33.897775 systemd-logind[1561]: New session 7 of user core.
Apr 21 10:19:33.907117 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:19:33.958154 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:19:33.958408 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:19:34.190236 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:19:34.190370 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:19:34.415279 dockerd[1794]: time="2026-04-21T10:19:34.415192130Z" level=info msg="Starting up"
Apr 21 10:19:34.620035 dockerd[1794]: time="2026-04-21T10:19:34.619955694Z" level=info msg="Loading containers: start."
Apr 21 10:19:34.730949 kernel: Initializing XFRM netlink socket
Apr 21 10:19:34.798068 systemd-networkd[1260]: docker0: Link UP
Apr 21 10:19:34.820698 dockerd[1794]: time="2026-04-21T10:19:34.820634422Z" level=info msg="Loading containers: done."
Apr 21 10:19:34.836135 dockerd[1794]: time="2026-04-21T10:19:34.836086055Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:19:34.836227 dockerd[1794]: time="2026-04-21T10:19:34.836179755Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:19:34.836254 dockerd[1794]: time="2026-04-21T10:19:34.836242852Z" level=info msg="Daemon has completed initialization"
Apr 21 10:19:34.869869 dockerd[1794]: time="2026-04-21T10:19:34.867998533Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:19:34.869703 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:19:35.260307 containerd[1573]: time="2026-04-21T10:19:35.260215888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 10:19:36.109570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157506371.mount: Deactivated successfully.
Apr 21 10:19:36.798278 containerd[1573]: time="2026-04-21T10:19:36.798208554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:36.799249 containerd[1573]: time="2026-04-21T10:19:36.799194039Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427"
Apr 21 10:19:36.801357 containerd[1573]: time="2026-04-21T10:19:36.801237982Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:36.803747 containerd[1573]: time="2026-04-21T10:19:36.803708231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:36.804647 containerd[1573]: time="2026-04-21T10:19:36.804609345Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.544350998s"
Apr 21 10:19:36.804677 containerd[1573]: time="2026-04-21T10:19:36.804649488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 10:19:36.805393 containerd[1573]: time="2026-04-21T10:19:36.805295765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 10:19:37.825704 containerd[1573]: time="2026-04-21T10:19:37.825647555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:37.826332 containerd[1573]: time="2026-04-21T10:19:37.826295273Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379"
Apr 21 10:19:37.827990 containerd[1573]: time="2026-04-21T10:19:37.827961714Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:37.831631 containerd[1573]: time="2026-04-21T10:19:37.831567843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:37.833313 containerd[1573]: time="2026-04-21T10:19:37.833270511Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.027936548s"
Apr 21 10:19:37.833385 containerd[1573]: time="2026-04-21T10:19:37.833321677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 10:19:37.835295 containerd[1573]: time="2026-04-21T10:19:37.835208407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 10:19:38.703236 containerd[1573]: time="2026-04-21T10:19:38.703174435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:38.704092 containerd[1573]: time="2026-04-21T10:19:38.704046690Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688"
Apr 21 10:19:38.705224 containerd[1573]: time="2026-04-21T10:19:38.705173130Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:38.707509 containerd[1573]: time="2026-04-21T10:19:38.707454400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:38.708769 containerd[1573]: time="2026-04-21T10:19:38.708718663Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 873.465184ms"
Apr 21 10:19:38.708823 containerd[1573]: time="2026-04-21T10:19:38.708769825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 10:19:38.709426 containerd[1573]: time="2026-04-21T10:19:38.709374627Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 10:19:38.787576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:19:38.797110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:19:38.965500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:19:38.969384 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:19:39.025249 kubelet[2022]: E0421 10:19:39.025216 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:19:39.029969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:19:39.030123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:19:39.645829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376357345.mount: Deactivated successfully.
Apr 21 10:19:39.991023 containerd[1573]: time="2026-04-21T10:19:39.990754750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:39.991672 containerd[1573]: time="2026-04-21T10:19:39.991581298Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605"
Apr 21 10:19:39.992654 containerd[1573]: time="2026-04-21T10:19:39.992615875Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:39.994252 containerd[1573]: time="2026-04-21T10:19:39.994219579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:39.995408 containerd[1573]: time="2026-04-21T10:19:39.995320921Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.28590372s"
Apr 21 10:19:39.995408 containerd[1573]: time="2026-04-21T10:19:39.995358886Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 10:19:39.996222 containerd[1573]: time="2026-04-21T10:19:39.996130548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 10:19:40.444157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633168242.mount: Deactivated successfully.
Apr 21 10:19:40.946381 containerd[1573]: time="2026-04-21T10:19:40.946304946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:40.947175 containerd[1573]: time="2026-04-21T10:19:40.947131469Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 21 10:19:40.948624 containerd[1573]: time="2026-04-21T10:19:40.948587492Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:40.951223 containerd[1573]: time="2026-04-21T10:19:40.951184644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:40.951997 containerd[1573]: time="2026-04-21T10:19:40.951968079Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 955.765649ms"
Apr 21 10:19:40.952057 containerd[1573]: time="2026-04-21T10:19:40.952001106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 10:19:40.952617 containerd[1573]: time="2026-04-21T10:19:40.952460321Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 10:19:41.366150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097860294.mount: Deactivated successfully.
Apr 21 10:19:41.371024 containerd[1573]: time="2026-04-21T10:19:41.370980435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:41.371770 containerd[1573]: time="2026-04-21T10:19:41.371721757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 21 10:19:41.372542 containerd[1573]: time="2026-04-21T10:19:41.372507637Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:41.376293 containerd[1573]: time="2026-04-21T10:19:41.376234834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:41.376997 containerd[1573]: time="2026-04-21T10:19:41.376962626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.480233ms"
Apr 21 10:19:41.376997 containerd[1573]: time="2026-04-21T10:19:41.376991821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 10:19:41.377588 containerd[1573]: time="2026-04-21T10:19:41.377423943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 10:19:41.832373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847744105.mount: Deactivated successfully.
Apr 21 10:19:42.545850 containerd[1573]: time="2026-04-21T10:19:42.545442164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:42.546840 containerd[1573]: time="2026-04-21T10:19:42.546768910Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826"
Apr 21 10:19:42.548257 containerd[1573]: time="2026-04-21T10:19:42.548182774Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:42.550526 containerd[1573]: time="2026-04-21T10:19:42.550455928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:19:42.551332 containerd[1573]: time="2026-04-21T10:19:42.551300645Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.173853336s"
Apr 21 10:19:42.551380 containerd[1573]: time="2026-04-21T10:19:42.551335852Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 10:19:45.644980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:19:45.655134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:19:45.675690 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-7.scope)...
Apr 21 10:19:45.675716 systemd[1]: Reloading...
Apr 21 10:19:45.728015 zram_generator::config[2229]: No configuration found.
Apr 21 10:19:45.815040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:19:45.859635 systemd[1]: Reloading finished in 183 ms.
Apr 21 10:19:45.900407 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:19:45.900454 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:19:45.900674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:19:45.902572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:19:45.997528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:19:46.001705 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:19:46.040831 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:19:46.040831 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:19:46.040831 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:19:46.041139 kubelet[2291]: I0421 10:19:46.040926 2291 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:19:46.380179 kubelet[2291]: I0421 10:19:46.380134 2291 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:19:46.380179 kubelet[2291]: I0421 10:19:46.380167 2291 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:19:46.380402 kubelet[2291]: I0421 10:19:46.380371 2291 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:19:46.404431 kubelet[2291]: I0421 10:19:46.404266 2291 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:19:46.406054 kubelet[2291]: E0421 10:19:46.405962 2291 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:19:46.411559 kubelet[2291]: E0421 10:19:46.411508 2291 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:19:46.411559 kubelet[2291]: I0421 10:19:46.411540 2291 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled.
Falling back to using cgroupDriver from kubelet config." Apr 21 10:19:46.414815 kubelet[2291]: I0421 10:19:46.414775 2291 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:19:46.415491 kubelet[2291]: I0421 10:19:46.415428 2291 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:19:46.415696 kubelet[2291]: I0421 10:19:46.415463 2291 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMan
agerPolicyOptions":null,"CgroupVersion":1} Apr 21 10:19:46.415696 kubelet[2291]: I0421 10:19:46.415682 2291 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:19:46.415696 kubelet[2291]: I0421 10:19:46.415689 2291 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:19:46.415822 kubelet[2291]: I0421 10:19:46.415789 2291 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:19:46.419039 kubelet[2291]: I0421 10:19:46.418981 2291 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:19:46.419039 kubelet[2291]: I0421 10:19:46.419003 2291 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:19:46.419039 kubelet[2291]: I0421 10:19:46.419025 2291 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:19:46.420205 kubelet[2291]: I0421 10:19:46.420182 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:19:46.422769 kubelet[2291]: I0421 10:19:46.422731 2291 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:19:46.423145 kubelet[2291]: I0421 10:19:46.423111 2291 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:19:46.424344 kubelet[2291]: W0421 10:19:46.424299 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:19:46.427433 kubelet[2291]: E0421 10:19:46.426861 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:19:46.427433 kubelet[2291]: E0421 10:19:46.427112 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:19:46.428030 kubelet[2291]: I0421 10:19:46.428014 2291 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:19:46.429207 kubelet[2291]: I0421 10:19:46.429197 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:19:46.432017 kubelet[2291]: I0421 10:19:46.432004 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:19:46.432755 kubelet[2291]: I0421 10:19:46.432734 2291 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:19:46.433126 kubelet[2291]: E0421 10:19:46.433110 2291 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:19:46.433436 kubelet[2291]: I0421 10:19:46.433423 2291 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:19:46.433473 kubelet[2291]: I0421 10:19:46.433460 2291 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:19:46.436431 kubelet[2291]: E0421 10:19:46.435572 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:19:46.436431 kubelet[2291]: I0421 10:19:46.435799 2291 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:19:46.436431 kubelet[2291]: I0421 10:19:46.435861 2291 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:19:46.436431 kubelet[2291]: E0421 10:19:46.436132 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms"
Apr 21 10:19:46.438930 kubelet[2291]: I0421 10:19:46.438886 2291 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:19:46.440506 kubelet[2291]: I0421 10:19:46.440443 2291 server.go:1289] "Started kubelet"
Apr 21 10:19:46.442154 kubelet[2291]: I0421 10:19:46.442119 2291 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:19:46.442947 kubelet[2291]: I0421 10:19:46.442859 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:19:46.443832 kubelet[2291]: I0421 10:19:46.443283 2291 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:19:46.446887 kubelet[2291]: I0421 10:19:46.446462 2291 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:19:46.449625 kubelet[2291]: E0421 10:19:46.438696 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a857fa81c1f7a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:19:46.428073892 +0000 UTC m=+0.421785713,LastTimestamp:2026-04-21 10:19:46.428073892 +0000 UTC m=+0.421785713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 21 10:19:46.452168 kubelet[2291]: I0421 10:19:46.452113 2291 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:19:46.454304 kubelet[2291]: I0421 10:19:46.453956 2291 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:19:46.454304 kubelet[2291]: I0421 10:19:46.454086 2291 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:19:46.454304 kubelet[2291]: I0421 10:19:46.454101 2291 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:19:46.454304 kubelet[2291]: I0421 10:19:46.454107 2291 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:19:46.454304 kubelet[2291]: E0421 10:19:46.454156 2291 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:19:46.456005 kubelet[2291]: E0421 10:19:46.455874 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:19:46.460790 kubelet[2291]: I0421 10:19:46.460773 2291 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:19:46.460790 kubelet[2291]: I0421 10:19:46.460788 2291 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:19:46.460879 kubelet[2291]: I0421 10:19:46.460800 2291 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:19:46.491355 kubelet[2291]: I0421 10:19:46.491276 2291 policy_none.go:49] "None policy: Start"
Apr 21 10:19:46.491355 kubelet[2291]: I0421 10:19:46.491318 2291 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:19:46.491355 kubelet[2291]: I0421 10:19:46.491329 2291 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:19:46.499107 kubelet[2291]: E0421 10:19:46.499062 2291 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:19:46.499347 kubelet[2291]: I0421 10:19:46.499313 2291 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:19:46.499415 kubelet[2291]: I0421 10:19:46.499346 2291 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:19:46.499690 kubelet[2291]: I0421 10:19:46.499660 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:19:46.500363 kubelet[2291]: E0421 10:19:46.500341 2291 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:19:46.500406 kubelet[2291]: E0421 10:19:46.500393 2291 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 21 10:19:46.563960 kubelet[2291]: E0421 10:19:46.563891 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:19:46.566176 kubelet[2291]: E0421 10:19:46.565555 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:19:46.567778 kubelet[2291]: E0421 10:19:46.567750 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:19:46.601080 kubelet[2291]: I0421 10:19:46.601031 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 10:19:46.601446 kubelet[2291]: E0421 10:19:46.601387 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 21 10:19:46.635232 kubelet[2291]: I0421 10:19:46.635010 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd6c380c03a7a50732d764825a823160-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd6c380c03a7a50732d764825a823160\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:19:46.635232 kubelet[2291]: I0421 10:19:46.635057 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:19:46.635232 kubelet[2291]: I0421 10:19:46.635090 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:19:46.635232 kubelet[2291]: I0421 10:19:46.635116 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:19:46.635232 kubelet[2291]: I0421 10:19:46.635168 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 10:19:46.635425 kubelet[2291]: I0421 10:19:46.635192 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd6c380c03a7a50732d764825a823160-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd6c380c03a7a50732d764825a823160\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:19:46.635425 kubelet[2291]: I0421 10:19:46.635216 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6c380c03a7a50732d764825a823160-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd6c380c03a7a50732d764825a823160\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:19:46.635425 kubelet[2291]: I0421 10:19:46.635237 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:19:46.635425 kubelet[2291]: I0421 10:19:46.635261 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:19:46.637563 kubelet[2291]: E0421 10:19:46.637539 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms"
Apr 21 10:19:46.803152 kubelet[2291]: I0421 10:19:46.803058 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 10:19:46.803512 kubelet[2291]: E0421 10:19:46.803458 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 21 10:19:46.864498 kubelet[2291]: E0421 10:19:46.864424 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:46.865354 containerd[1573]: time="2026-04-21T10:19:46.865250313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd6c380c03a7a50732d764825a823160,Namespace:kube-system,Attempt:0,}"
Apr 21 10:19:46.866530 kubelet[2291]: E0421 10:19:46.866463 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:46.866968 containerd[1573]: time="2026-04-21T10:19:46.866931903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}"
Apr 21 10:19:46.868172 kubelet[2291]: E0421 10:19:46.868131 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:46.868485 containerd[1573]: time="2026-04-21T10:19:46.868457220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}"
Apr 21 10:19:47.039057 kubelet[2291]: E0421 10:19:47.038993 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms"
Apr 21 10:19:47.205250 kubelet[2291]: I0421 10:19:47.205212 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 10:19:47.205533 kubelet[2291]: E0421 10:19:47.205505 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Apr 21 10:19:47.293555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706069985.mount: Deactivated successfully.
Apr 21 10:19:47.300129 containerd[1573]: time="2026-04-21T10:19:47.300090482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:19:47.301818 containerd[1573]: time="2026-04-21T10:19:47.301743417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 21 10:19:47.302792 containerd[1573]: time="2026-04-21T10:19:47.302759017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:19:47.303663 containerd[1573]: time="2026-04-21T10:19:47.303588450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:19:47.304348 containerd[1573]: time="2026-04-21T10:19:47.304313971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 21 10:19:47.305401 containerd[1573]: time="2026-04-21T10:19:47.305368359Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:19:47.306138 containerd[1573]: time="2026-04-21T10:19:47.306084169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 21 10:19:47.308008 containerd[1573]: time="2026-04-21T10:19:47.307981632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 21 10:19:47.309870 containerd[1573]: time="2026-04-21T10:19:47.309835438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.836213ms"
Apr 21 10:19:47.310648 containerd[1573]: time="2026-04-21T10:19:47.310585152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.068685ms"
Apr 21 10:19:47.311351 containerd[1573]: time="2026-04-21T10:19:47.311289834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.940709ms"
Apr 21 10:19:47.413261 containerd[1573]: time="2026-04-21T10:19:47.413013246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:19:47.413261 containerd[1573]: time="2026-04-21T10:19:47.413066723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:19:47.413261 containerd[1573]: time="2026-04-21T10:19:47.413078860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:19:47.413261 containerd[1573]: time="2026-04-21T10:19:47.413147732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:19:47.415229 containerd[1573]: time="2026-04-21T10:19:47.415129588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:19:47.415229 containerd[1573]: time="2026-04-21T10:19:47.415227582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:19:47.415388 containerd[1573]: time="2026-04-21T10:19:47.415244806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:19:47.415388 containerd[1573]: time="2026-04-21T10:19:47.415306272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:19:47.425237 containerd[1573]: time="2026-04-21T10:19:47.424984480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:19:47.425237 containerd[1573]: time="2026-04-21T10:19:47.425113668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:19:47.425237 containerd[1573]: time="2026-04-21T10:19:47.425126776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:19:47.425658 containerd[1573]: time="2026-04-21T10:19:47.425376814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:19:47.603349 containerd[1573]: time="2026-04-21T10:19:47.603249912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"856ea9f6b225c5afba9633e321f0c18f3926fb55fad3a358d8a3c26c0ef7a1b8\""
Apr 21 10:19:47.604144 containerd[1573]: time="2026-04-21T10:19:47.603883262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2a2e51b13f13b942a0624d73d2dd0f92e131a13e25637ea16c53e01299c9f2f\""
Apr 21 10:19:47.604744 kubelet[2291]: E0421 10:19:47.604729 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:47.605877 kubelet[2291]: E0421 10:19:47.605749 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:47.608471 containerd[1573]: time="2026-04-21T10:19:47.608248789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd6c380c03a7a50732d764825a823160,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa8327f3cc3c1205870ed4833f5f8fd0caf85b5f7523296284dc9abf0167e7a7\""
Apr 21 10:19:47.608773 kubelet[2291]: E0421 10:19:47.608754 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:47.609875 containerd[1573]: time="2026-04-21T10:19:47.609804924Z" level=info msg="CreateContainer within sandbox \"d2a2e51b13f13b942a0624d73d2dd0f92e131a13e25637ea16c53e01299c9f2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 10:19:47.611969 containerd[1573]: time="2026-04-21T10:19:47.611932738Z" level=info msg="CreateContainer within sandbox \"856ea9f6b225c5afba9633e321f0c18f3926fb55fad3a358d8a3c26c0ef7a1b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 10:19:47.614842 containerd[1573]: time="2026-04-21T10:19:47.614772419Z" level=info msg="CreateContainer within sandbox \"aa8327f3cc3c1205870ed4833f5f8fd0caf85b5f7523296284dc9abf0167e7a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 10:19:47.622775 containerd[1573]: time="2026-04-21T10:19:47.622743129Z" level=info msg="CreateContainer within sandbox \"d2a2e51b13f13b942a0624d73d2dd0f92e131a13e25637ea16c53e01299c9f2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2bffb7bbc1dc3860f0b5dfcc35ae6272a9cfb9382573d600187f3ed8aa8d73d3\""
Apr 21 10:19:47.623419 containerd[1573]: time="2026-04-21T10:19:47.623365177Z" level=info msg="StartContainer for \"2bffb7bbc1dc3860f0b5dfcc35ae6272a9cfb9382573d600187f3ed8aa8d73d3\""
Apr 21 10:19:47.636173 containerd[1573]: time="2026-04-21T10:19:47.636132176Z" level=info msg="CreateContainer within sandbox \"aa8327f3cc3c1205870ed4833f5f8fd0caf85b5f7523296284dc9abf0167e7a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c5efed5ee53d5d0aa1dad3bd6784fe2727a89e7a47327a62e87c31b0fed37eb7\""
Apr 21 10:19:47.636647 containerd[1573]: time="2026-04-21T10:19:47.636623558Z" level=info msg="StartContainer for \"c5efed5ee53d5d0aa1dad3bd6784fe2727a89e7a47327a62e87c31b0fed37eb7\""
Apr 21 10:19:47.638948 containerd[1573]: time="2026-04-21T10:19:47.638859727Z" level=info msg="CreateContainer within sandbox \"856ea9f6b225c5afba9633e321f0c18f3926fb55fad3a358d8a3c26c0ef7a1b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"775f58cd9dfa8a56bf5df61d636d7e7fe070b3738bdd9f2203c6de76a9a194e5\""
Apr 21 10:19:47.639516 containerd[1573]: time="2026-04-21T10:19:47.639489066Z" level=info msg="StartContainer for \"775f58cd9dfa8a56bf5df61d636d7e7fe070b3738bdd9f2203c6de76a9a194e5\""
Apr 21 10:19:47.706474 containerd[1573]: time="2026-04-21T10:19:47.706434520Z" level=info msg="StartContainer for \"2bffb7bbc1dc3860f0b5dfcc35ae6272a9cfb9382573d600187f3ed8aa8d73d3\" returns successfully"
Apr 21 10:19:47.713131 containerd[1573]: time="2026-04-21T10:19:47.713095443Z" level=info msg="StartContainer for \"c5efed5ee53d5d0aa1dad3bd6784fe2727a89e7a47327a62e87c31b0fed37eb7\" returns successfully"
Apr 21 10:19:47.721865 containerd[1573]: time="2026-04-21T10:19:47.721818853Z" level=info msg="StartContainer for \"775f58cd9dfa8a56bf5df61d636d7e7fe070b3738bdd9f2203c6de76a9a194e5\" returns successfully"
Apr 21 10:19:47.753023 kubelet[2291]: E0421 10:19:47.752456 2291 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:19:48.009042 kubelet[2291]: I0421 10:19:48.008089 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 10:19:48.116306 kernel: hrtimer: interrupt took 16217727 ns
Apr 21 10:19:48.468014 kubelet[2291]: E0421 10:19:48.467970 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:19:48.468866 kubelet[2291]: E0421 10:19:48.468087 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:19:48.472505 kubelet[2291]: E0421 10:19:48.472427 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from
the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:19:48.472632 kubelet[2291]: E0421 10:19:48.472600 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:48.475228 kubelet[2291]: E0421 10:19:48.475178 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:19:48.475330 kubelet[2291]: E0421 10:19:48.475294 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:49.478275 kubelet[2291]: E0421 10:19:49.478207 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:19:49.478679 kubelet[2291]: E0421 10:19:49.478394 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:49.479195 kubelet[2291]: E0421 10:19:49.479180 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:19:49.479290 kubelet[2291]: E0421 10:19:49.479261 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:49.723081 kubelet[2291]: E0421 10:19:49.723001 2291 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:19:49.723387 kubelet[2291]: E0421 10:19:49.723339 2291 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:49.782310 kubelet[2291]: E0421 10:19:49.782203 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 10:19:49.880959 kubelet[2291]: I0421 10:19:49.880800 2291 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:19:49.880959 kubelet[2291]: E0421 10:19:49.880936 2291 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 21 10:19:49.933333 kubelet[2291]: I0421 10:19:49.933294 2291 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:49.942361 kubelet[2291]: E0421 10:19:49.942281 2291 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:49.942361 kubelet[2291]: I0421 10:19:49.942311 2291 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:49.943713 kubelet[2291]: E0421 10:19:49.943682 2291 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:49.943713 kubelet[2291]: I0421 10:19:49.943707 2291 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:19:49.944779 kubelet[2291]: E0421 10:19:49.944764 2291 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 
10:19:50.424784 kubelet[2291]: I0421 10:19:50.424668 2291 apiserver.go:52] "Watching apiserver" Apr 21 10:19:50.434616 kubelet[2291]: I0421 10:19:50.434564 2291 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:19:51.940602 systemd[1]: Reloading requested from client PID 2577 ('systemctl') (unit session-7.scope)... Apr 21 10:19:51.940626 systemd[1]: Reloading... Apr 21 10:19:51.985966 zram_generator::config[2616]: No configuration found. Apr 21 10:19:52.065948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:19:52.113822 systemd[1]: Reloading finished in 172 ms. Apr 21 10:19:52.138929 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:19:52.158617 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:19:52.158946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:19:52.174168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:19:52.269318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:19:52.273231 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:19:52.314500 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:19:52.314500 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 21 10:19:52.314500 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:19:52.314500 kubelet[2671]: I0421 10:19:52.313074 2671 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:19:52.318208 kubelet[2671]: I0421 10:19:52.318171 2671 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:19:52.318208 kubelet[2671]: I0421 10:19:52.318201 2671 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:19:52.318366 kubelet[2671]: I0421 10:19:52.318336 2671 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:19:52.319306 kubelet[2671]: I0421 10:19:52.319278 2671 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:19:52.322079 kubelet[2671]: I0421 10:19:52.322047 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:19:52.328416 kubelet[2671]: E0421 10:19:52.328007 2671 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:19:52.328416 kubelet[2671]: I0421 10:19:52.328027 2671 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:19:52.333178 kubelet[2671]: I0421 10:19:52.333156 2671 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:19:52.334839 kubelet[2671]: I0421 10:19:52.333586 2671 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:19:52.334839 kubelet[2671]: I0421 10:19:52.333609 2671 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 10:19:52.334839 kubelet[2671]: I0421 10:19:52.333754 2671 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:19:52.334839 
kubelet[2671]: I0421 10:19:52.333762 2671 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:19:52.334839 kubelet[2671]: I0421 10:19:52.333798 2671 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:19:52.335053 kubelet[2671]: I0421 10:19:52.333953 2671 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:19:52.335053 kubelet[2671]: I0421 10:19:52.333981 2671 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:19:52.335053 kubelet[2671]: I0421 10:19:52.334027 2671 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:19:52.335053 kubelet[2671]: I0421 10:19:52.334054 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:19:52.336011 kubelet[2671]: I0421 10:19:52.335980 2671 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:19:52.336887 kubelet[2671]: I0421 10:19:52.336787 2671 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:19:52.347180 kubelet[2671]: I0421 10:19:52.346225 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:19:52.347180 kubelet[2671]: I0421 10:19:52.346260 2671 server.go:1289] "Started kubelet" Apr 21 10:19:52.347180 kubelet[2671]: I0421 10:19:52.347017 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:19:52.352589 kubelet[2671]: I0421 10:19:52.351209 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:19:52.352589 kubelet[2671]: I0421 10:19:52.351214 2671 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:19:52.354498 kubelet[2671]: I0421 10:19:52.354426 2671 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:19:52.359476 
kubelet[2671]: I0421 10:19:52.359438 2671 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:19:52.361754 kubelet[2671]: I0421 10:19:52.361743 2671 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:19:52.363131 kubelet[2671]: I0421 10:19:52.362953 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:19:52.363867 kubelet[2671]: I0421 10:19:52.363853 2671 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:19:52.366742 kubelet[2671]: E0421 10:19:52.366724 2671 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:19:52.367389 kubelet[2671]: I0421 10:19:52.367244 2671 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:19:52.367389 kubelet[2671]: I0421 10:19:52.367310 2671 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:19:52.368739 kubelet[2671]: I0421 10:19:52.368694 2671 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:19:52.369826 kubelet[2671]: I0421 10:19:52.369557 2671 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:19:52.378981 kubelet[2671]: I0421 10:19:52.378952 2671 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:19:52.380019 kubelet[2671]: I0421 10:19:52.379991 2671 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:19:52.380074 kubelet[2671]: I0421 10:19:52.380021 2671 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:19:52.380074 kubelet[2671]: I0421 10:19:52.380039 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:19:52.380074 kubelet[2671]: I0421 10:19:52.380046 2671 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:19:52.380150 kubelet[2671]: E0421 10:19:52.380081 2671 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:19:52.410424 kubelet[2671]: I0421 10:19:52.410386 2671 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:19:52.410424 kubelet[2671]: I0421 10:19:52.410409 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:19:52.410424 kubelet[2671]: I0421 10:19:52.410423 2671 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:19:52.410597 kubelet[2671]: I0421 10:19:52.410515 2671 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:19:52.410597 kubelet[2671]: I0421 10:19:52.410522 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:19:52.410597 kubelet[2671]: I0421 10:19:52.410535 2671 policy_none.go:49] "None policy: Start" Apr 21 10:19:52.410597 kubelet[2671]: I0421 10:19:52.410543 2671 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:19:52.410597 kubelet[2671]: I0421 10:19:52.410549 2671 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:19:52.410693 kubelet[2671]: I0421 10:19:52.410610 2671 state_mem.go:75] "Updated machine memory state" Apr 21 10:19:52.412182 kubelet[2671]: E0421 10:19:52.411487 2671 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:19:52.412182 kubelet[2671]: I0421 
10:19:52.411611 2671 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:19:52.412182 kubelet[2671]: I0421 10:19:52.411618 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:19:52.412182 kubelet[2671]: I0421 10:19:52.411781 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:19:52.412552 kubelet[2671]: E0421 10:19:52.412540 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:19:52.481861 kubelet[2671]: I0421 10:19:52.481791 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:52.481861 kubelet[2671]: I0421 10:19:52.481845 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:52.483095 kubelet[2671]: I0421 10:19:52.483063 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:19:52.518757 kubelet[2671]: I0421 10:19:52.518722 2671 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:19:52.526508 kubelet[2671]: I0421 10:19:52.526379 2671 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 21 10:19:52.526508 kubelet[2671]: I0421 10:19:52.526450 2671 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:19:52.664291 kubelet[2671]: I0421 10:19:52.664237 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:52.664291 kubelet[2671]: I0421 10:19:52.664299 2671 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd6c380c03a7a50732d764825a823160-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd6c380c03a7a50732d764825a823160\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:52.664291 kubelet[2671]: I0421 10:19:52.664317 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6c380c03a7a50732d764825a823160-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd6c380c03a7a50732d764825a823160\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:52.664559 kubelet[2671]: I0421 10:19:52.664331 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:52.664559 kubelet[2671]: I0421 10:19:52.664347 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:52.664810 kubelet[2671]: I0421 10:19:52.664756 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:19:52.664810 kubelet[2671]: I0421 10:19:52.664787 
2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd6c380c03a7a50732d764825a823160-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd6c380c03a7a50732d764825a823160\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:52.664810 kubelet[2671]: I0421 10:19:52.664804 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:52.664858 kubelet[2671]: I0421 10:19:52.664817 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:52.789350 kubelet[2671]: E0421 10:19:52.789297 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:52.789468 kubelet[2671]: E0421 10:19:52.789443 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:52.789601 kubelet[2671]: E0421 10:19:52.789562 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:53.335259 kubelet[2671]: I0421 10:19:53.335210 2671 apiserver.go:52] "Watching apiserver" Apr 21 10:19:53.365010 kubelet[2671]: I0421 
10:19:53.364971 2671 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:19:53.392351 kubelet[2671]: I0421 10:19:53.391699 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:19:53.392351 kubelet[2671]: I0421 10:19:53.391733 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:53.392351 kubelet[2671]: I0421 10:19:53.391776 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:53.398274 kubelet[2671]: E0421 10:19:53.398195 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:19:53.399056 kubelet[2671]: E0421 10:19:53.398336 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:53.399056 kubelet[2671]: E0421 10:19:53.398708 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:19:53.399056 kubelet[2671]: E0421 10:19:53.398785 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:53.399056 kubelet[2671]: E0421 10:19:53.398841 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 10:19:53.399056 kubelet[2671]: E0421 10:19:53.399013 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
10:19:53.421060 kubelet[2671]: I0421 10:19:53.420874 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.420862076 podStartE2EDuration="1.420862076s" podCreationTimestamp="2026-04-21 10:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:19:53.414766264 +0000 UTC m=+1.135221127" watchObservedRunningTime="2026-04-21 10:19:53.420862076 +0000 UTC m=+1.141316934" Apr 21 10:19:53.428192 kubelet[2671]: I0421 10:19:53.428158 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.428151982 podStartE2EDuration="1.428151982s" podCreationTimestamp="2026-04-21 10:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:19:53.427998043 +0000 UTC m=+1.148452905" watchObservedRunningTime="2026-04-21 10:19:53.428151982 +0000 UTC m=+1.148606843" Apr 21 10:19:53.428298 kubelet[2671]: I0421 10:19:53.428209 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.42820679 podStartE2EDuration="1.42820679s" podCreationTimestamp="2026-04-21 10:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:19:53.421138557 +0000 UTC m=+1.141593407" watchObservedRunningTime="2026-04-21 10:19:53.42820679 +0000 UTC m=+1.148661650" Apr 21 10:19:54.393391 kubelet[2671]: E0421 10:19:54.393340 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:54.394096 kubelet[2671]: E0421 10:19:54.393754 2671 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:54.394096 kubelet[2671]: E0421 10:19:54.393956 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:55.437404 kubelet[2671]: E0421 10:19:55.437343 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:56.488406 kubelet[2671]: E0421 10:19:56.488318 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:57.072304 kubelet[2671]: I0421 10:19:57.072232 2671 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:19:57.072564 containerd[1573]: time="2026-04-21T10:19:57.072497634Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 10:19:57.072844 kubelet[2671]: I0421 10:19:57.072759 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:19:57.799524 kubelet[2671]: I0421 10:19:57.799479 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9523a49d-c665-4be9-a995-557ac9c78b2b-kube-proxy\") pod \"kube-proxy-tqhcj\" (UID: \"9523a49d-c665-4be9-a995-557ac9c78b2b\") " pod="kube-system/kube-proxy-tqhcj" Apr 21 10:19:57.799524 kubelet[2671]: I0421 10:19:57.799521 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9523a49d-c665-4be9-a995-557ac9c78b2b-xtables-lock\") pod \"kube-proxy-tqhcj\" (UID: \"9523a49d-c665-4be9-a995-557ac9c78b2b\") " pod="kube-system/kube-proxy-tqhcj" Apr 21 10:19:57.799524 kubelet[2671]: I0421 10:19:57.799538 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9523a49d-c665-4be9-a995-557ac9c78b2b-lib-modules\") pod \"kube-proxy-tqhcj\" (UID: \"9523a49d-c665-4be9-a995-557ac9c78b2b\") " pod="kube-system/kube-proxy-tqhcj" Apr 21 10:19:57.799961 kubelet[2671]: I0421 10:19:57.799562 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcbjm\" (UniqueName: \"kubernetes.io/projected/9523a49d-c665-4be9-a995-557ac9c78b2b-kube-api-access-wcbjm\") pod \"kube-proxy-tqhcj\" (UID: \"9523a49d-c665-4be9-a995-557ac9c78b2b\") " pod="kube-system/kube-proxy-tqhcj" Apr 21 10:19:58.012421 kubelet[2671]: E0421 10:19:58.012363 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:58.013034 containerd[1573]: time="2026-04-21T10:19:58.012946089Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqhcj,Uid:9523a49d-c665-4be9-a995-557ac9c78b2b,Namespace:kube-system,Attempt:0,}" Apr 21 10:19:58.034751 containerd[1573]: time="2026-04-21T10:19:58.034643935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:19:58.035163 containerd[1573]: time="2026-04-21T10:19:58.035120949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:19:58.035232 containerd[1573]: time="2026-04-21T10:19:58.035141342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:19:58.035301 containerd[1573]: time="2026-04-21T10:19:58.035200949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:19:58.064815 containerd[1573]: time="2026-04-21T10:19:58.064729337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqhcj,Uid:9523a49d-c665-4be9-a995-557ac9c78b2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"794d5b4765d7d32ab39b55305edbe761589ff5a8ada21e899a3bb8947e56a455\"" Apr 21 10:19:58.065590 kubelet[2671]: E0421 10:19:58.065399 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:58.070090 containerd[1573]: time="2026-04-21T10:19:58.070054336Z" level=info msg="CreateContainer within sandbox \"794d5b4765d7d32ab39b55305edbe761589ff5a8ada21e899a3bb8947e56a455\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:19:58.084383 containerd[1573]: time="2026-04-21T10:19:58.084347258Z" level=info msg="CreateContainer within sandbox \"794d5b4765d7d32ab39b55305edbe761589ff5a8ada21e899a3bb8947e56a455\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6924fcfa8d4042ee9c673e4dae001fb38245534bff62202f91db05904181c9a\"" Apr 21 10:19:58.086332 containerd[1573]: time="2026-04-21T10:19:58.085363648Z" level=info msg="StartContainer for \"b6924fcfa8d4042ee9c673e4dae001fb38245534bff62202f91db05904181c9a\"" Apr 21 10:19:58.136593 containerd[1573]: time="2026-04-21T10:19:58.136366992Z" level=info msg="StartContainer for \"b6924fcfa8d4042ee9c673e4dae001fb38245534bff62202f91db05904181c9a\" returns successfully" Apr 21 10:19:58.400667 kubelet[2671]: E0421 10:19:58.400347 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:58.402490 kubelet[2671]: I0421 10:19:58.402442 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23b96a1d-0b43-4aac-8b7d-4e407bf9fb6d-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-g6pbm\" (UID: \"23b96a1d-0b43-4aac-8b7d-4e407bf9fb6d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-g6pbm" Apr 21 10:19:58.402490 kubelet[2671]: I0421 10:19:58.402475 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmqcp\" (UniqueName: \"kubernetes.io/projected/23b96a1d-0b43-4aac-8b7d-4e407bf9fb6d-kube-api-access-cmqcp\") pod \"tigera-operator-6bf85f8dd-g6pbm\" (UID: \"23b96a1d-0b43-4aac-8b7d-4e407bf9fb6d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-g6pbm" Apr 21 10:19:58.434401 kubelet[2671]: E0421 10:19:58.434374 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:58.448079 kubelet[2671]: I0421 10:19:58.448007 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-tqhcj" podStartSLOduration=1.447994173 podStartE2EDuration="1.447994173s" podCreationTimestamp="2026-04-21 10:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:19:58.411292621 +0000 UTC m=+6.131747482" watchObservedRunningTime="2026-04-21 10:19:58.447994173 +0000 UTC m=+6.168449034" Apr 21 10:19:58.529545 containerd[1573]: time="2026-04-21T10:19:58.529502061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-g6pbm,Uid:23b96a1d-0b43-4aac-8b7d-4e407bf9fb6d,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:19:58.558147 containerd[1573]: time="2026-04-21T10:19:58.557443311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:19:58.558147 containerd[1573]: time="2026-04-21T10:19:58.558105117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:19:58.558147 containerd[1573]: time="2026-04-21T10:19:58.558116260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:19:58.558354 containerd[1573]: time="2026-04-21T10:19:58.558189127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:19:58.601463 containerd[1573]: time="2026-04-21T10:19:58.601409941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-g6pbm,Uid:23b96a1d-0b43-4aac-8b7d-4e407bf9fb6d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e585550c427ca32f4b872348341920b79c7ee8bc0b56b03fae10f1536e2040db\"" Apr 21 10:19:58.603024 containerd[1573]: time="2026-04-21T10:19:58.602996265Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:19:59.404018 kubelet[2671]: E0421 10:19:59.403958 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:19:59.871758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660415441.mount: Deactivated successfully. Apr 21 10:20:00.649184 containerd[1573]: time="2026-04-21T10:20:00.649080776Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:00.649553 containerd[1573]: time="2026-04-21T10:20:00.649500231Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:20:00.650567 containerd[1573]: time="2026-04-21T10:20:00.650522061Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:00.652553 containerd[1573]: time="2026-04-21T10:20:00.652518017Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:00.652998 containerd[1573]: time="2026-04-21T10:20:00.652976389Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.049945046s" Apr 21 10:20:00.653026 containerd[1573]: time="2026-04-21T10:20:00.653005774Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:20:00.657838 containerd[1573]: time="2026-04-21T10:20:00.657675246Z" level=info msg="CreateContainer within sandbox \"e585550c427ca32f4b872348341920b79c7ee8bc0b56b03fae10f1536e2040db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:20:00.672746 containerd[1573]: time="2026-04-21T10:20:00.672604949Z" level=info msg="CreateContainer within sandbox \"e585550c427ca32f4b872348341920b79c7ee8bc0b56b03fae10f1536e2040db\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dec6dccdef5bee00932ef2ef90a5c64d0118d46595f978838f2dab076a1aa34d\"" Apr 21 10:20:00.673974 containerd[1573]: time="2026-04-21T10:20:00.673347994Z" level=info msg="StartContainer for \"dec6dccdef5bee00932ef2ef90a5c64d0118d46595f978838f2dab076a1aa34d\"" Apr 21 10:20:00.715286 containerd[1573]: time="2026-04-21T10:20:00.715248840Z" level=info msg="StartContainer for \"dec6dccdef5bee00932ef2ef90a5c64d0118d46595f978838f2dab076a1aa34d\" returns successfully" Apr 21 10:20:01.419948 kubelet[2671]: I0421 10:20:01.419864 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-g6pbm" podStartSLOduration=1.368672573 podStartE2EDuration="3.419844148s" podCreationTimestamp="2026-04-21 10:19:58 +0000 UTC" firstStartedPulling="2026-04-21 10:19:58.602575242 +0000 UTC m=+6.323030093" lastFinishedPulling="2026-04-21 10:20:00.653746817 +0000 UTC m=+8.374201668" observedRunningTime="2026-04-21 
10:20:01.41973966 +0000 UTC m=+9.140194521" watchObservedRunningTime="2026-04-21 10:20:01.419844148 +0000 UTC m=+9.140299010" Apr 21 10:20:05.443508 kubelet[2671]: E0421 10:20:05.443457 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:05.688790 sudo[1776]: pam_unix(sudo:session): session closed for user root Apr 21 10:20:05.690692 sshd[1769]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:05.696787 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:42476.service: Deactivated successfully. Apr 21 10:20:05.699862 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:20:05.702001 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:20:05.710394 systemd-logind[1561]: Removed session 7. Apr 21 10:20:06.420684 kubelet[2671]: E0421 10:20:06.418060 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:06.500333 kubelet[2671]: E0421 10:20:06.500287 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:07.214218 kubelet[2671]: I0421 10:20:07.214151 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-flexvol-driver-host\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214218 kubelet[2671]: I0421 10:20:07.214222 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-lib-modules\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214355 kubelet[2671]: I0421 10:20:07.214237 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9564db94-38fa-47dd-a968-4898c33ecc0e-tigera-ca-bundle\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214355 kubelet[2671]: I0421 10:20:07.214254 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-var-lib-calico\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214355 kubelet[2671]: I0421 10:20:07.214268 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a279dd50-1853-4439-a978-8dfb341a7ca9-typha-certs\") pod \"calico-typha-59d6fc5986-xcldz\" (UID: \"a279dd50-1853-4439-a978-8dfb341a7ca9\") " pod="calico-system/calico-typha-59d6fc5986-xcldz" Apr 21 10:20:07.214355 kubelet[2671]: I0421 10:20:07.214282 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-bpffs\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214355 kubelet[2671]: I0421 10:20:07.214294 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-policysync\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214462 kubelet[2671]: I0421 10:20:07.214306 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-cni-net-dir\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214462 kubelet[2671]: I0421 10:20:07.214318 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-var-run-calico\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214462 kubelet[2671]: I0421 10:20:07.214329 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-nodeproc\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214462 kubelet[2671]: I0421 10:20:07.214341 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-sys-fs\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214462 kubelet[2671]: I0421 10:20:07.214351 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-cni-bin-dir\") pod \"calico-node-68tnv\" 
(UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214557 kubelet[2671]: I0421 10:20:07.214360 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-cni-log-dir\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214557 kubelet[2671]: I0421 10:20:07.214371 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9564db94-38fa-47dd-a968-4898c33ecc0e-node-certs\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.214557 kubelet[2671]: I0421 10:20:07.214382 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a279dd50-1853-4439-a978-8dfb341a7ca9-tigera-ca-bundle\") pod \"calico-typha-59d6fc5986-xcldz\" (UID: \"a279dd50-1853-4439-a978-8dfb341a7ca9\") " pod="calico-system/calico-typha-59d6fc5986-xcldz" Apr 21 10:20:07.214557 kubelet[2671]: I0421 10:20:07.214393 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtsjr\" (UniqueName: \"kubernetes.io/projected/a279dd50-1853-4439-a978-8dfb341a7ca9-kube-api-access-jtsjr\") pod \"calico-typha-59d6fc5986-xcldz\" (UID: \"a279dd50-1853-4439-a978-8dfb341a7ca9\") " pod="calico-system/calico-typha-59d6fc5986-xcldz" Apr 21 10:20:07.314258 kubelet[2671]: E0421 10:20:07.314181 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:07.315944 kubelet[2671]: I0421 10:20:07.314528 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sn79\" (UniqueName: \"kubernetes.io/projected/9564db94-38fa-47dd-a968-4898c33ecc0e-kube-api-access-2sn79\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.315944 kubelet[2671]: I0421 10:20:07.314612 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9564db94-38fa-47dd-a968-4898c33ecc0e-xtables-lock\") pod \"calico-node-68tnv\" (UID: \"9564db94-38fa-47dd-a968-4898c33ecc0e\") " pod="calico-system/calico-node-68tnv" Apr 21 10:20:07.316609 kubelet[2671]: E0421 10:20:07.316469 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.316679 kubelet[2671]: W0421 10:20:07.316626 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.316679 kubelet[2671]: E0421 10:20:07.316640 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.324977 kubelet[2671]: E0421 10:20:07.324353 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.324977 kubelet[2671]: W0421 10:20:07.324366 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.324977 kubelet[2671]: E0421 10:20:07.324379 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.325788 kubelet[2671]: E0421 10:20:07.325490 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.325788 kubelet[2671]: W0421 10:20:07.325501 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.325788 kubelet[2671]: E0421 10:20:07.325548 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.329749 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.330866 kubelet[2671]: W0421 10:20:07.329764 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.329819 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.330089 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.330866 kubelet[2671]: W0421 10:20:07.330095 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.330102 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.330240 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.330866 kubelet[2671]: W0421 10:20:07.330245 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.330277 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.330866 kubelet[2671]: E0421 10:20:07.330508 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.331169 kubelet[2671]: W0421 10:20:07.330514 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.331169 kubelet[2671]: E0421 10:20:07.330520 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.331169 kubelet[2671]: E0421 10:20:07.330772 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.331169 kubelet[2671]: W0421 10:20:07.330779 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.331169 kubelet[2671]: E0421 10:20:07.330786 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.331169 kubelet[2671]: E0421 10:20:07.330984 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.331169 kubelet[2671]: W0421 10:20:07.330989 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.331169 kubelet[2671]: E0421 10:20:07.330995 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.331360 kubelet[2671]: E0421 10:20:07.331188 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.331360 kubelet[2671]: W0421 10:20:07.331193 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.331360 kubelet[2671]: E0421 10:20:07.331199 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.331951 kubelet[2671]: E0421 10:20:07.331874 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.331951 kubelet[2671]: W0421 10:20:07.331888 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.331951 kubelet[2671]: E0421 10:20:07.331922 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.332052 kubelet[2671]: E0421 10:20:07.332043 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.332052 kubelet[2671]: W0421 10:20:07.332047 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.332108 kubelet[2671]: E0421 10:20:07.332053 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.332251 kubelet[2671]: E0421 10:20:07.332240 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.332366 kubelet[2671]: W0421 10:20:07.332293 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.332366 kubelet[2671]: E0421 10:20:07.332308 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.332591 kubelet[2671]: E0421 10:20:07.332575 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.332591 kubelet[2671]: W0421 10:20:07.332589 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.332705 kubelet[2671]: E0421 10:20:07.332596 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.334018 kubelet[2671]: E0421 10:20:07.333618 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.334018 kubelet[2671]: W0421 10:20:07.333634 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.334018 kubelet[2671]: E0421 10:20:07.333642 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.335059 kubelet[2671]: E0421 10:20:07.334971 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.335059 kubelet[2671]: W0421 10:20:07.334983 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.335059 kubelet[2671]: E0421 10:20:07.334993 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.337438 kubelet[2671]: E0421 10:20:07.337009 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.337438 kubelet[2671]: W0421 10:20:07.337017 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.337438 kubelet[2671]: E0421 10:20:07.337025 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.415433 kubelet[2671]: E0421 10:20:07.415406 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.415667 kubelet[2671]: W0421 10:20:07.415621 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.415739 kubelet[2671]: E0421 10:20:07.415678 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.415739 kubelet[2671]: I0421 10:20:07.415720 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e073554a-e6ab-44ff-a032-f5d7862b4ec3-varrun\") pod \"csi-node-driver-dpxdk\" (UID: \"e073554a-e6ab-44ff-a032-f5d7862b4ec3\") " pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:07.416040 kubelet[2671]: E0421 10:20:07.416017 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.416040 kubelet[2671]: W0421 10:20:07.416025 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.416040 kubelet[2671]: E0421 10:20:07.416033 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.416254 kubelet[2671]: E0421 10:20:07.416232 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.416254 kubelet[2671]: W0421 10:20:07.416252 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.416972 kubelet[2671]: E0421 10:20:07.416265 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.416972 kubelet[2671]: I0421 10:20:07.416287 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e073554a-e6ab-44ff-a032-f5d7862b4ec3-kubelet-dir\") pod \"csi-node-driver-dpxdk\" (UID: \"e073554a-e6ab-44ff-a032-f5d7862b4ec3\") " pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:07.416972 kubelet[2671]: E0421 10:20:07.416492 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.416972 kubelet[2671]: W0421 10:20:07.416502 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.416972 kubelet[2671]: E0421 10:20:07.416512 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.416972 kubelet[2671]: I0421 10:20:07.416527 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e073554a-e6ab-44ff-a032-f5d7862b4ec3-socket-dir\") pod \"csi-node-driver-dpxdk\" (UID: \"e073554a-e6ab-44ff-a032-f5d7862b4ec3\") " pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:07.416972 kubelet[2671]: E0421 10:20:07.416816 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.416972 kubelet[2671]: W0421 10:20:07.416823 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.416972 kubelet[2671]: E0421 10:20:07.416829 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.417297 kubelet[2671]: I0421 10:20:07.416842 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9h6\" (UniqueName: \"kubernetes.io/projected/e073554a-e6ab-44ff-a032-f5d7862b4ec3-kube-api-access-fv9h6\") pod \"csi-node-driver-dpxdk\" (UID: \"e073554a-e6ab-44ff-a032-f5d7862b4ec3\") " pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:07.417297 kubelet[2671]: E0421 10:20:07.417060 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.417297 kubelet[2671]: W0421 10:20:07.417070 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.417297 kubelet[2671]: E0421 10:20:07.417080 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.417297 kubelet[2671]: I0421 10:20:07.417139 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e073554a-e6ab-44ff-a032-f5d7862b4ec3-registration-dir\") pod \"csi-node-driver-dpxdk\" (UID: \"e073554a-e6ab-44ff-a032-f5d7862b4ec3\") " pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:07.417297 kubelet[2671]: E0421 10:20:07.417286 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.417297 kubelet[2671]: W0421 10:20:07.417291 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.417297 kubelet[2671]: E0421 10:20:07.417298 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.417553 kubelet[2671]: E0421 10:20:07.417504 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.417594 kubelet[2671]: W0421 10:20:07.417556 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.417594 kubelet[2671]: E0421 10:20:07.417563 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.417714 kubelet[2671]: E0421 10:20:07.417702 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.417714 kubelet[2671]: W0421 10:20:07.417708 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.417753 kubelet[2671]: E0421 10:20:07.417715 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.418020 kubelet[2671]: E0421 10:20:07.417923 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.418020 kubelet[2671]: W0421 10:20:07.417932 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.418020 kubelet[2671]: E0421 10:20:07.417940 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.418158 kubelet[2671]: E0421 10:20:07.418152 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.418286 kubelet[2671]: W0421 10:20:07.418265 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.418286 kubelet[2671]: E0421 10:20:07.418287 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.418454 kubelet[2671]: E0421 10:20:07.418441 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.418454 kubelet[2671]: W0421 10:20:07.418453 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.418492 kubelet[2671]: E0421 10:20:07.418459 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.418635 kubelet[2671]: E0421 10:20:07.418621 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.418635 kubelet[2671]: W0421 10:20:07.418632 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.418695 kubelet[2671]: E0421 10:20:07.418637 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.418836 kubelet[2671]: E0421 10:20:07.418804 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.418836 kubelet[2671]: W0421 10:20:07.418820 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.418836 kubelet[2671]: E0421 10:20:07.418827 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.419015 kubelet[2671]: E0421 10:20:07.419002 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.419015 kubelet[2671]: W0421 10:20:07.419014 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.419055 kubelet[2671]: E0421 10:20:07.419019 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.419230 kubelet[2671]: E0421 10:20:07.419214 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.419250 kubelet[2671]: W0421 10:20:07.419229 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.419250 kubelet[2671]: E0421 10:20:07.419238 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.419427 kubelet[2671]: E0421 10:20:07.419413 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.419427 kubelet[2671]: W0421 10:20:07.419425 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.419465 kubelet[2671]: E0421 10:20:07.419431 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.419625 kubelet[2671]: E0421 10:20:07.419611 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.419625 kubelet[2671]: W0421 10:20:07.419623 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.419679 kubelet[2671]: E0421 10:20:07.419629 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.420040 kubelet[2671]: E0421 10:20:07.420021 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.420040 kubelet[2671]: W0421 10:20:07.420039 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.420078 kubelet[2671]: E0421 10:20:07.420047 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.420837 kubelet[2671]: E0421 10:20:07.420735 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.420837 kubelet[2671]: W0421 10:20:07.420751 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.420837 kubelet[2671]: E0421 10:20:07.420769 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.421164 kubelet[2671]: E0421 10:20:07.421107 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.421164 kubelet[2671]: W0421 10:20:07.421156 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.421205 kubelet[2671]: E0421 10:20:07.421168 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.421468 kubelet[2671]: E0421 10:20:07.421454 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.421513 kubelet[2671]: W0421 10:20:07.421469 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.421513 kubelet[2671]: E0421 10:20:07.421477 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.421736 kubelet[2671]: E0421 10:20:07.421717 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.421736 kubelet[2671]: W0421 10:20:07.421735 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.421798 kubelet[2671]: E0421 10:20:07.421742 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.422029 kubelet[2671]: E0421 10:20:07.421978 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.422029 kubelet[2671]: W0421 10:20:07.421993 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.422029 kubelet[2671]: E0421 10:20:07.421999 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.422175 kubelet[2671]: E0421 10:20:07.422160 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.422175 kubelet[2671]: W0421 10:20:07.422166 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.422175 kubelet[2671]: E0421 10:20:07.422172 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.431369 kubelet[2671]: E0421 10:20:07.431350 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.431369 kubelet[2671]: W0421 10:20:07.431368 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.431456 kubelet[2671]: E0421 10:20:07.431379 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.473198 kubelet[2671]: E0421 10:20:07.471176 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:07.473285 containerd[1573]: time="2026-04-21T10:20:07.471885342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d6fc5986-xcldz,Uid:a279dd50-1853-4439-a978-8dfb341a7ca9,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:07.494642 containerd[1573]: time="2026-04-21T10:20:07.494092112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:07.494642 containerd[1573]: time="2026-04-21T10:20:07.494590119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:07.494642 containerd[1573]: time="2026-04-21T10:20:07.494623212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:07.494962 containerd[1573]: time="2026-04-21T10:20:07.494868696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:07.508814 containerd[1573]: time="2026-04-21T10:20:07.508779165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68tnv,Uid:9564db94-38fa-47dd-a968-4898c33ecc0e,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:07.522253 kubelet[2671]: E0421 10:20:07.522190 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.522253 kubelet[2671]: W0421 10:20:07.522204 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.522253 kubelet[2671]: E0421 10:20:07.522221 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.522435 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.523039 kubelet[2671]: W0421 10:20:07.522441 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.522448 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.522629 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.523039 kubelet[2671]: W0421 10:20:07.522634 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.522640 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.522826 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.523039 kubelet[2671]: W0421 10:20:07.522831 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.522836 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.523039 kubelet[2671]: E0421 10:20:07.523035 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.523370 kubelet[2671]: W0421 10:20:07.523040 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.523370 kubelet[2671]: E0421 10:20:07.523045 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.523370 kubelet[2671]: E0421 10:20:07.523237 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.523370 kubelet[2671]: W0421 10:20:07.523243 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.523370 kubelet[2671]: E0421 10:20:07.523248 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.523449 kubelet[2671]: E0421 10:20:07.523433 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.523563 kubelet[2671]: W0421 10:20:07.523443 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.523563 kubelet[2671]: E0421 10:20:07.523489 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.524352 kubelet[2671]: E0421 10:20:07.524323 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.524352 kubelet[2671]: W0421 10:20:07.524341 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.524352 kubelet[2671]: E0421 10:20:07.524348 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.524745 kubelet[2671]: E0421 10:20:07.524728 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.524745 kubelet[2671]: W0421 10:20:07.524735 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.524745 kubelet[2671]: E0421 10:20:07.524741 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.525062 kubelet[2671]: E0421 10:20:07.525041 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.525062 kubelet[2671]: W0421 10:20:07.525061 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.525151 kubelet[2671]: E0421 10:20:07.525068 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.525577 kubelet[2671]: E0421 10:20:07.525528 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.525577 kubelet[2671]: W0421 10:20:07.525548 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.525577 kubelet[2671]: E0421 10:20:07.525557 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.525804 kubelet[2671]: E0421 10:20:07.525782 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.525804 kubelet[2671]: W0421 10:20:07.525799 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.525862 kubelet[2671]: E0421 10:20:07.525807 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.526044 kubelet[2671]: E0421 10:20:07.526020 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.526044 kubelet[2671]: W0421 10:20:07.526034 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.526044 kubelet[2671]: E0421 10:20:07.526040 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.526220 kubelet[2671]: E0421 10:20:07.526176 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.526220 kubelet[2671]: W0421 10:20:07.526182 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.526220 kubelet[2671]: E0421 10:20:07.526187 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.526368 kubelet[2671]: E0421 10:20:07.526334 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.526368 kubelet[2671]: W0421 10:20:07.526346 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.526368 kubelet[2671]: E0421 10:20:07.526352 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.526527 kubelet[2671]: E0421 10:20:07.526478 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.526527 kubelet[2671]: W0421 10:20:07.526491 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.526527 kubelet[2671]: E0421 10:20:07.526496 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.526704 kubelet[2671]: E0421 10:20:07.526672 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.526704 kubelet[2671]: W0421 10:20:07.526687 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.526704 kubelet[2671]: E0421 10:20:07.526693 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.526829 kubelet[2671]: E0421 10:20:07.526807 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.526829 kubelet[2671]: W0421 10:20:07.526823 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.526889 kubelet[2671]: E0421 10:20:07.526871 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.528036 kubelet[2671]: E0421 10:20:07.527994 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.528036 kubelet[2671]: W0421 10:20:07.528015 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.528036 kubelet[2671]: E0421 10:20:07.528023 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.528353 kubelet[2671]: E0421 10:20:07.528172 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.528353 kubelet[2671]: W0421 10:20:07.528179 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.528353 kubelet[2671]: E0421 10:20:07.528187 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.528516 kubelet[2671]: E0421 10:20:07.528490 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.528516 kubelet[2671]: W0421 10:20:07.528513 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.528553 kubelet[2671]: E0421 10:20:07.528522 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.529764 kubelet[2671]: E0421 10:20:07.529731 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.529764 kubelet[2671]: W0421 10:20:07.529751 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.529764 kubelet[2671]: E0421 10:20:07.529759 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.530350 kubelet[2671]: E0421 10:20:07.530313 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.530350 kubelet[2671]: W0421 10:20:07.530336 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.530350 kubelet[2671]: E0421 10:20:07.530345 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.530974 kubelet[2671]: E0421 10:20:07.530948 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.530974 kubelet[2671]: W0421 10:20:07.530969 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.531036 kubelet[2671]: E0421 10:20:07.530977 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:07.531544 kubelet[2671]: E0421 10:20:07.531502 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.533927 kubelet[2671]: W0421 10:20:07.531614 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.533927 kubelet[2671]: E0421 10:20:07.531627 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.535043 kubelet[2671]: E0421 10:20:07.535017 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:07.535043 kubelet[2671]: W0421 10:20:07.535028 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:07.535043 kubelet[2671]: E0421 10:20:07.535038 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:07.538123 containerd[1573]: time="2026-04-21T10:20:07.536007509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:07.538123 containerd[1573]: time="2026-04-21T10:20:07.536055714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:07.538123 containerd[1573]: time="2026-04-21T10:20:07.536063914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:07.538123 containerd[1573]: time="2026-04-21T10:20:07.537015611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:07.552974 containerd[1573]: time="2026-04-21T10:20:07.552947326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d6fc5986-xcldz,Uid:a279dd50-1853-4439-a978-8dfb341a7ca9,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e56f403e984667ee59dac0b2591f1a383fa3ab37e59201cdde338a482c53c37\"" Apr 21 10:20:07.553803 kubelet[2671]: E0421 10:20:07.553734 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:07.554926 containerd[1573]: time="2026-04-21T10:20:07.554718430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:20:07.568612 containerd[1573]: time="2026-04-21T10:20:07.568588373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68tnv,Uid:9564db94-38fa-47dd-a968-4898c33ecc0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\"" Apr 21 10:20:09.381040 kubelet[2671]: E0421 10:20:09.380967 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:09.419729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615324836.mount: Deactivated successfully. 
Apr 21 10:20:10.345368 containerd[1573]: time="2026-04-21T10:20:10.345302521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:10.346331 containerd[1573]: time="2026-04-21T10:20:10.346277704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:20:10.347179 containerd[1573]: time="2026-04-21T10:20:10.347157009Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:10.350940 containerd[1573]: time="2026-04-21T10:20:10.350289341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:10.351437 containerd[1573]: time="2026-04-21T10:20:10.351377087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.796633211s" Apr 21 10:20:10.351484 containerd[1573]: time="2026-04-21T10:20:10.351439597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:20:10.353014 containerd[1573]: time="2026-04-21T10:20:10.352997603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:20:10.370733 containerd[1573]: time="2026-04-21T10:20:10.370687505Z" level=info msg="CreateContainer within sandbox \"8e56f403e984667ee59dac0b2591f1a383fa3ab37e59201cdde338a482c53c37\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:20:10.385460 containerd[1573]: time="2026-04-21T10:20:10.385400015Z" level=info msg="CreateContainer within sandbox \"8e56f403e984667ee59dac0b2591f1a383fa3ab37e59201cdde338a482c53c37\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fdd9fd457dc39abcb198e6dbd5dd000d71d582f61b06b77c3357d05040e7a754\"" Apr 21 10:20:10.385833 containerd[1573]: time="2026-04-21T10:20:10.385798754Z" level=info msg="StartContainer for \"fdd9fd457dc39abcb198e6dbd5dd000d71d582f61b06b77c3357d05040e7a754\"" Apr 21 10:20:10.449298 containerd[1573]: time="2026-04-21T10:20:10.449220932Z" level=info msg="StartContainer for \"fdd9fd457dc39abcb198e6dbd5dd000d71d582f61b06b77c3357d05040e7a754\" returns successfully" Apr 21 10:20:11.382677 kubelet[2671]: E0421 10:20:11.382528 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:11.431762 kubelet[2671]: E0421 10:20:11.431695 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:11.444077 kubelet[2671]: E0421 10:20:11.442579 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444077 kubelet[2671]: W0421 10:20:11.442596 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444077 kubelet[2671]: E0421 10:20:11.442612 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.444077 kubelet[2671]: E0421 10:20:11.442858 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444077 kubelet[2671]: W0421 10:20:11.442865 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444077 kubelet[2671]: E0421 10:20:11.442873 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.444077 kubelet[2671]: I0421 10:20:11.442995 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59d6fc5986-xcldz" podStartSLOduration=1.644687433 podStartE2EDuration="4.44298445s" podCreationTimestamp="2026-04-21 10:20:07 +0000 UTC" firstStartedPulling="2026-04-21 10:20:07.55451529 +0000 UTC m=+15.274970140" lastFinishedPulling="2026-04-21 10:20:10.352812294 +0000 UTC m=+18.073267157" observedRunningTime="2026-04-21 10:20:11.442714998 +0000 UTC m=+19.163169858" watchObservedRunningTime="2026-04-21 10:20:11.44298445 +0000 UTC m=+19.163439311" Apr 21 10:20:11.444077 kubelet[2671]: E0421 10:20:11.443021 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444387 kubelet[2671]: W0421 10:20:11.443026 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444387 kubelet[2671]: E0421 10:20:11.443032 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.444387 kubelet[2671]: E0421 10:20:11.443147 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444387 kubelet[2671]: W0421 10:20:11.443151 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444387 kubelet[2671]: E0421 10:20:11.443156 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.444387 kubelet[2671]: E0421 10:20:11.443251 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444387 kubelet[2671]: W0421 10:20:11.443255 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444387 kubelet[2671]: E0421 10:20:11.443260 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.444387 kubelet[2671]: E0421 10:20:11.443340 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444387 kubelet[2671]: W0421 10:20:11.443344 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444536 kubelet[2671]: E0421 10:20:11.443348 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.444536 kubelet[2671]: E0421 10:20:11.443420 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444536 kubelet[2671]: W0421 10:20:11.443424 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444536 kubelet[2671]: E0421 10:20:11.443428 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.444536 kubelet[2671]: E0421 10:20:11.443556 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.444536 kubelet[2671]: W0421 10:20:11.443565 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.444536 kubelet[2671]: E0421 10:20:11.443575 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.445845 kubelet[2671]: E0421 10:20:11.445822 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.445845 kubelet[2671]: W0421 10:20:11.445842 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.445845 kubelet[2671]: E0421 10:20:11.445852 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.446086 kubelet[2671]: E0421 10:20:11.446068 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.446086 kubelet[2671]: W0421 10:20:11.446082 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.446086 kubelet[2671]: E0421 10:20:11.446089 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.446221 kubelet[2671]: E0421 10:20:11.446205 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.446245 kubelet[2671]: W0421 10:20:11.446223 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.446245 kubelet[2671]: E0421 10:20:11.446229 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.446372 kubelet[2671]: E0421 10:20:11.446358 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.446372 kubelet[2671]: W0421 10:20:11.446371 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.446410 kubelet[2671]: E0421 10:20:11.446377 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.446611 kubelet[2671]: E0421 10:20:11.446594 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.446660 kubelet[2671]: W0421 10:20:11.446611 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.446660 kubelet[2671]: E0421 10:20:11.446622 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.446825 kubelet[2671]: E0421 10:20:11.446810 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.446825 kubelet[2671]: W0421 10:20:11.446823 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.446860 kubelet[2671]: E0421 10:20:11.446829 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.447044 kubelet[2671]: E0421 10:20:11.447031 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.447044 kubelet[2671]: W0421 10:20:11.447043 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.447085 kubelet[2671]: E0421 10:20:11.447048 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.450299 kubelet[2671]: E0421 10:20:11.450272 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.450299 kubelet[2671]: W0421 10:20:11.450291 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.450299 kubelet[2671]: E0421 10:20:11.450299 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.450483 kubelet[2671]: E0421 10:20:11.450459 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.450483 kubelet[2671]: W0421 10:20:11.450474 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.450483 kubelet[2671]: E0421 10:20:11.450481 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.450676 kubelet[2671]: E0421 10:20:11.450662 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.450676 kubelet[2671]: W0421 10:20:11.450674 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.450712 kubelet[2671]: E0421 10:20:11.450680 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.450933 kubelet[2671]: E0421 10:20:11.450885 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.450933 kubelet[2671]: W0421 10:20:11.450929 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.450933 kubelet[2671]: E0421 10:20:11.450937 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.451160 kubelet[2671]: E0421 10:20:11.451121 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.451160 kubelet[2671]: W0421 10:20:11.451136 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.451160 kubelet[2671]: E0421 10:20:11.451143 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.451293 kubelet[2671]: E0421 10:20:11.451277 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.451293 kubelet[2671]: W0421 10:20:11.451290 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.451350 kubelet[2671]: E0421 10:20:11.451296 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.451466 kubelet[2671]: E0421 10:20:11.451453 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.451466 kubelet[2671]: W0421 10:20:11.451464 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.451499 kubelet[2671]: E0421 10:20:11.451470 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.451801 kubelet[2671]: E0421 10:20:11.451784 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.451801 kubelet[2671]: W0421 10:20:11.451800 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.451856 kubelet[2671]: E0421 10:20:11.451809 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.452025 kubelet[2671]: E0421 10:20:11.452012 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.452025 kubelet[2671]: W0421 10:20:11.452025 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.452064 kubelet[2671]: E0421 10:20:11.452030 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.452198 kubelet[2671]: E0421 10:20:11.452185 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.452198 kubelet[2671]: W0421 10:20:11.452197 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.452236 kubelet[2671]: E0421 10:20:11.452203 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.452358 kubelet[2671]: E0421 10:20:11.452346 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.452358 kubelet[2671]: W0421 10:20:11.452358 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.452395 kubelet[2671]: E0421 10:20:11.452363 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.452529 kubelet[2671]: E0421 10:20:11.452516 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.452529 kubelet[2671]: W0421 10:20:11.452528 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.452568 kubelet[2671]: E0421 10:20:11.452533 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.452742 kubelet[2671]: E0421 10:20:11.452726 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.452742 kubelet[2671]: W0421 10:20:11.452738 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.452785 kubelet[2671]: E0421 10:20:11.452744 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.453041 kubelet[2671]: E0421 10:20:11.453022 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.453041 kubelet[2671]: W0421 10:20:11.453035 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.453122 kubelet[2671]: E0421 10:20:11.453043 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.453233 kubelet[2671]: E0421 10:20:11.453215 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.453233 kubelet[2671]: W0421 10:20:11.453226 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.453233 kubelet[2671]: E0421 10:20:11.453231 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.453426 kubelet[2671]: E0421 10:20:11.453411 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.453426 kubelet[2671]: W0421 10:20:11.453422 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.453459 kubelet[2671]: E0421 10:20:11.453427 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:11.453694 kubelet[2671]: E0421 10:20:11.453664 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.453694 kubelet[2671]: W0421 10:20:11.453681 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.453694 kubelet[2671]: E0421 10:20:11.453688 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:20:11.453863 kubelet[2671]: E0421 10:20:11.453838 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:20:11.453863 kubelet[2671]: W0421 10:20:11.453853 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:20:11.453863 kubelet[2671]: E0421 10:20:11.453858 2671 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:20:12.133027 containerd[1573]: time="2026-04-21T10:20:12.132962812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:12.134021 containerd[1573]: time="2026-04-21T10:20:12.133886312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:20:12.134672 containerd[1573]: time="2026-04-21T10:20:12.134588574Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:12.136915 containerd[1573]: time="2026-04-21T10:20:12.136707642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:12.137259 containerd[1573]: time="2026-04-21T10:20:12.137210140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.784074085s" Apr 21 10:20:12.137320 containerd[1573]: time="2026-04-21T10:20:12.137274004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:20:12.140547 containerd[1573]: time="2026-04-21T10:20:12.140517157Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:20:12.159325 containerd[1573]: time="2026-04-21T10:20:12.159181728Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25\"" Apr 21 10:20:12.159931 containerd[1573]: time="2026-04-21T10:20:12.159791799Z" level=info msg="StartContainer for \"473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25\"" Apr 21 10:20:12.180562 systemd[1]: run-containerd-runc-k8s.io-473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25-runc.ewTi3X.mount: Deactivated successfully. Apr 21 10:20:12.206052 containerd[1573]: time="2026-04-21T10:20:12.206015962Z" level=info msg="StartContainer for \"473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25\" returns successfully" Apr 21 10:20:12.235801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25-rootfs.mount: Deactivated successfully. 
Apr 21 10:20:12.262232 containerd[1573]: time="2026-04-21T10:20:12.262150247Z" level=info msg="shim disconnected" id=473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25 namespace=k8s.io Apr 21 10:20:12.262232 containerd[1573]: time="2026-04-21T10:20:12.262222634Z" level=warning msg="cleaning up after shim disconnected" id=473324b12af2e1ee150fe17db76f2797e2124d5cb1db4c96b16b148ac414ff25 namespace=k8s.io Apr 21 10:20:12.262232 containerd[1573]: time="2026-04-21T10:20:12.262232680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:20:12.433584 kubelet[2671]: I0421 10:20:12.433459 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:20:12.434049 kubelet[2671]: E0421 10:20:12.433829 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:12.434529 containerd[1573]: time="2026-04-21T10:20:12.434487572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:20:12.669062 update_engine[1563]: I20260421 10:20:12.668967 1563 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:20:12.694967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3410) Apr 21 10:20:12.721982 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3412) Apr 21 10:20:12.751866 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3412) Apr 21 10:20:13.381501 kubelet[2671]: E0421 10:20:13.381378 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:15.390123 kubelet[2671]: E0421 10:20:15.389763 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:17.382995 kubelet[2671]: E0421 10:20:17.380640 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:18.934376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203488991.mount: Deactivated successfully. 
Apr 21 10:20:19.107272 containerd[1573]: time="2026-04-21T10:20:19.107156576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:19.107945 containerd[1573]: time="2026-04-21T10:20:19.107870074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:20:19.111201 containerd[1573]: time="2026-04-21T10:20:19.109476866Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:19.112388 containerd[1573]: time="2026-04-21T10:20:19.112357771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:19.113084 containerd[1573]: time="2026-04-21T10:20:19.113040037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.678510153s" Apr 21 10:20:19.113115 containerd[1573]: time="2026-04-21T10:20:19.113083900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:20:19.118652 containerd[1573]: time="2026-04-21T10:20:19.118561325Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:20:19.147839 containerd[1573]: time="2026-04-21T10:20:19.147758862Z" level=info 
msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"8a7d978da3d4cb79b611990aaed7af8fcb6feccb185ac4d91eb191db962abd67\"" Apr 21 10:20:19.149301 containerd[1573]: time="2026-04-21T10:20:19.148222836Z" level=info msg="StartContainer for \"8a7d978da3d4cb79b611990aaed7af8fcb6feccb185ac4d91eb191db962abd67\"" Apr 21 10:20:19.252807 containerd[1573]: time="2026-04-21T10:20:19.252602083Z" level=info msg="StartContainer for \"8a7d978da3d4cb79b611990aaed7af8fcb6feccb185ac4d91eb191db962abd67\" returns successfully" Apr 21 10:20:19.308094 containerd[1573]: time="2026-04-21T10:20:19.307992173Z" level=info msg="shim disconnected" id=8a7d978da3d4cb79b611990aaed7af8fcb6feccb185ac4d91eb191db962abd67 namespace=k8s.io Apr 21 10:20:19.308094 containerd[1573]: time="2026-04-21T10:20:19.308060310Z" level=warning msg="cleaning up after shim disconnected" id=8a7d978da3d4cb79b611990aaed7af8fcb6feccb185ac4d91eb191db962abd67 namespace=k8s.io Apr 21 10:20:19.308094 containerd[1573]: time="2026-04-21T10:20:19.308068166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:20:19.382168 kubelet[2671]: E0421 10:20:19.381814 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:19.462651 containerd[1573]: time="2026-04-21T10:20:19.462606908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:20:19.935261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a7d978da3d4cb79b611990aaed7af8fcb6feccb185ac4d91eb191db962abd67-rootfs.mount: Deactivated successfully. 
Apr 21 10:20:21.380773 kubelet[2671]: E0421 10:20:21.380608 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:23.381311 kubelet[2671]: E0421 10:20:23.381224 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:25.381257 kubelet[2671]: E0421 10:20:25.381167 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:26.253263 containerd[1573]: time="2026-04-21T10:20:26.253188928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:26.254158 containerd[1573]: time="2026-04-21T10:20:26.254096808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:20:26.255224 containerd[1573]: time="2026-04-21T10:20:26.255180116Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:26.257060 containerd[1573]: time="2026-04-21T10:20:26.257024692Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:26.257503 containerd[1573]: time="2026-04-21T10:20:26.257468906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 6.794835899s" Apr 21 10:20:26.257624 containerd[1573]: time="2026-04-21T10:20:26.257505463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:20:26.261529 containerd[1573]: time="2026-04-21T10:20:26.261497523Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:20:26.278039 containerd[1573]: time="2026-04-21T10:20:26.278004398Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79\"" Apr 21 10:20:26.278602 containerd[1573]: time="2026-04-21T10:20:26.278571575Z" level=info msg="StartContainer for \"433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79\"" Apr 21 10:20:26.316747 systemd[1]: run-containerd-runc-k8s.io-433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79-runc.u8lRRT.mount: Deactivated successfully. 
Apr 21 10:20:26.369022 containerd[1573]: time="2026-04-21T10:20:26.368973721Z" level=info msg="StartContainer for \"433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79\" returns successfully" Apr 21 10:20:26.802812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79-rootfs.mount: Deactivated successfully. Apr 21 10:20:26.807491 containerd[1573]: time="2026-04-21T10:20:26.807388024Z" level=info msg="shim disconnected" id=433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79 namespace=k8s.io Apr 21 10:20:26.807491 containerd[1573]: time="2026-04-21T10:20:26.807449875Z" level=warning msg="cleaning up after shim disconnected" id=433b0d52ff2e04015c20c8f7d6083e733676ee885f1903486dd1c510dbebcc79 namespace=k8s.io Apr 21 10:20:26.807491 containerd[1573]: time="2026-04-21T10:20:26.807457013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:20:26.834189 kubelet[2671]: I0421 10:20:26.834123 2671 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:20:26.905567 kubelet[2671]: I0421 10:20:26.905251 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrqx\" (UniqueName: \"kubernetes.io/projected/bf7ddc57-ba5f-4002-92d4-b664fca67867-kube-api-access-lmrqx\") pod \"coredns-674b8bbfcf-vtmzf\" (UID: \"bf7ddc57-ba5f-4002-92d4-b664fca67867\") " pod="kube-system/coredns-674b8bbfcf-vtmzf" Apr 21 10:20:26.909002 kubelet[2671]: I0421 10:20:26.906442 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf5f2a04-0539-42fa-a71e-30dc3c2207a4-tigera-ca-bundle\") pod \"calico-kube-controllers-6884ccd5b8-w5mwb\" (UID: \"bf5f2a04-0539-42fa-a71e-30dc3c2207a4\") " pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" Apr 21 10:20:26.909002 kubelet[2671]: I0421 
10:20:26.906465 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzbh\" (UniqueName: \"kubernetes.io/projected/bf5f2a04-0539-42fa-a71e-30dc3c2207a4-kube-api-access-9xzbh\") pod \"calico-kube-controllers-6884ccd5b8-w5mwb\" (UID: \"bf5f2a04-0539-42fa-a71e-30dc3c2207a4\") " pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" Apr 21 10:20:26.909002 kubelet[2671]: I0421 10:20:26.906485 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/520d03aa-bd03-4c5c-9f6e-49911f08321d-goldmane-key-pair\") pod \"goldmane-5b85766d88-l8wx5\" (UID: \"520d03aa-bd03-4c5c-9f6e-49911f08321d\") " pod="calico-system/goldmane-5b85766d88-l8wx5" Apr 21 10:20:26.909002 kubelet[2671]: I0421 10:20:26.906498 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-ca-bundle\") pod \"whisker-66c5849fb6-ghkc9\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " pod="calico-system/whisker-66c5849fb6-ghkc9" Apr 21 10:20:26.909002 kubelet[2671]: I0421 10:20:26.906519 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578qj\" (UniqueName: \"kubernetes.io/projected/39d404b5-22a7-43b4-8197-a96841dc8873-kube-api-access-578qj\") pod \"whisker-66c5849fb6-ghkc9\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " pod="calico-system/whisker-66c5849fb6-ghkc9" Apr 21 10:20:26.909166 kubelet[2671]: I0421 10:20:26.906665 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/520d03aa-bd03-4c5c-9f6e-49911f08321d-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-l8wx5\" (UID: \"520d03aa-bd03-4c5c-9f6e-49911f08321d\") " 
pod="calico-system/goldmane-5b85766d88-l8wx5" Apr 21 10:20:26.909166 kubelet[2671]: I0421 10:20:26.906738 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/520d03aa-bd03-4c5c-9f6e-49911f08321d-config\") pod \"goldmane-5b85766d88-l8wx5\" (UID: \"520d03aa-bd03-4c5c-9f6e-49911f08321d\") " pod="calico-system/goldmane-5b85766d88-l8wx5" Apr 21 10:20:26.909166 kubelet[2671]: I0421 10:20:26.906753 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-nginx-config\") pod \"whisker-66c5849fb6-ghkc9\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " pod="calico-system/whisker-66c5849fb6-ghkc9" Apr 21 10:20:26.909166 kubelet[2671]: I0421 10:20:26.906766 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpqv\" (UniqueName: \"kubernetes.io/projected/520d03aa-bd03-4c5c-9f6e-49911f08321d-kube-api-access-shpqv\") pod \"goldmane-5b85766d88-l8wx5\" (UID: \"520d03aa-bd03-4c5c-9f6e-49911f08321d\") " pod="calico-system/goldmane-5b85766d88-l8wx5" Apr 21 10:20:26.909166 kubelet[2671]: I0421 10:20:26.906783 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7ddc57-ba5f-4002-92d4-b664fca67867-config-volume\") pod \"coredns-674b8bbfcf-vtmzf\" (UID: \"bf7ddc57-ba5f-4002-92d4-b664fca67867\") " pod="kube-system/coredns-674b8bbfcf-vtmzf" Apr 21 10:20:26.909269 kubelet[2671]: I0421 10:20:26.906831 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-backend-key-pair\") pod \"whisker-66c5849fb6-ghkc9\" (UID: 
\"39d404b5-22a7-43b4-8197-a96841dc8873\") " pod="calico-system/whisker-66c5849fb6-ghkc9" Apr 21 10:20:27.007540 kubelet[2671]: I0421 10:20:27.007493 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a3033d5e-8f86-4407-9e8b-329c5d9f5e56-calico-apiserver-certs\") pod \"calico-apiserver-57b589b74f-ktxzm\" (UID: \"a3033d5e-8f86-4407-9e8b-329c5d9f5e56\") " pod="calico-system/calico-apiserver-57b589b74f-ktxzm" Apr 21 10:20:27.007540 kubelet[2671]: I0421 10:20:27.007536 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87-config-volume\") pod \"coredns-674b8bbfcf-qrn9t\" (UID: \"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87\") " pod="kube-system/coredns-674b8bbfcf-qrn9t" Apr 21 10:20:27.007888 kubelet[2671]: I0421 10:20:27.007761 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7sg\" (UniqueName: \"kubernetes.io/projected/301078df-91a6-4fff-b66d-08ba5d84899e-kube-api-access-md7sg\") pod \"calico-apiserver-57b589b74f-9ncj6\" (UID: \"301078df-91a6-4fff-b66d-08ba5d84899e\") " pod="calico-system/calico-apiserver-57b589b74f-9ncj6" Apr 21 10:20:27.007888 kubelet[2671]: I0421 10:20:27.007882 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/301078df-91a6-4fff-b66d-08ba5d84899e-calico-apiserver-certs\") pod \"calico-apiserver-57b589b74f-9ncj6\" (UID: \"301078df-91a6-4fff-b66d-08ba5d84899e\") " pod="calico-system/calico-apiserver-57b589b74f-9ncj6" Apr 21 10:20:27.007971 kubelet[2671]: I0421 10:20:27.007929 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfm7j\" (UniqueName: 
\"kubernetes.io/projected/a3033d5e-8f86-4407-9e8b-329c5d9f5e56-kube-api-access-dfm7j\") pod \"calico-apiserver-57b589b74f-ktxzm\" (UID: \"a3033d5e-8f86-4407-9e8b-329c5d9f5e56\") " pod="calico-system/calico-apiserver-57b589b74f-ktxzm" Apr 21 10:20:27.008389 kubelet[2671]: I0421 10:20:27.007996 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czxr\" (UniqueName: \"kubernetes.io/projected/de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87-kube-api-access-4czxr\") pod \"coredns-674b8bbfcf-qrn9t\" (UID: \"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87\") " pod="kube-system/coredns-674b8bbfcf-qrn9t" Apr 21 10:20:27.171486 containerd[1573]: time="2026-04-21T10:20:27.171216863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884ccd5b8-w5mwb,Uid:bf5f2a04-0539-42fa-a71e-30dc3c2207a4,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:27.174968 kubelet[2671]: E0421 10:20:27.173996 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:27.175255 containerd[1573]: time="2026-04-21T10:20:27.174307209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtmzf,Uid:bf7ddc57-ba5f-4002-92d4-b664fca67867,Namespace:kube-system,Attempt:0,}" Apr 21 10:20:27.181414 containerd[1573]: time="2026-04-21T10:20:27.181354169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l8wx5,Uid:520d03aa-bd03-4c5c-9f6e-49911f08321d,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:27.187374 containerd[1573]: time="2026-04-21T10:20:27.187318978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-ktxzm,Uid:a3033d5e-8f86-4407-9e8b-329c5d9f5e56,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:27.188972 containerd[1573]: time="2026-04-21T10:20:27.188944924Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-66c5849fb6-ghkc9,Uid:39d404b5-22a7-43b4-8197-a96841dc8873,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:27.209152 kubelet[2671]: E0421 10:20:27.209113 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:27.219715 containerd[1573]: time="2026-04-21T10:20:27.219163185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-9ncj6,Uid:301078df-91a6-4fff-b66d-08ba5d84899e,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:27.219715 containerd[1573]: time="2026-04-21T10:20:27.219643857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrn9t,Uid:de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87,Namespace:kube-system,Attempt:0,}" Apr 21 10:20:27.366089 containerd[1573]: time="2026-04-21T10:20:27.366037706Z" level=error msg="Failed to destroy network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.368411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9-shm.mount: Deactivated successfully. 
Apr 21 10:20:27.368633 containerd[1573]: time="2026-04-21T10:20:27.368481389Z" level=error msg="encountered an error cleaning up failed sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.373926 containerd[1573]: time="2026-04-21T10:20:27.373705478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-ktxzm,Uid:a3033d5e-8f86-4407-9e8b-329c5d9f5e56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.386397 kubelet[2671]: E0421 10:20:27.386325 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.386535 kubelet[2671]: E0421 10:20:27.386420 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57b589b74f-ktxzm" Apr 21 10:20:27.386535 kubelet[2671]: E0421 
10:20:27.386445 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57b589b74f-ktxzm" Apr 21 10:20:27.386623 kubelet[2671]: E0421 10:20:27.386569 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b589b74f-ktxzm_calico-system(a3033d5e-8f86-4407-9e8b-329c5d9f5e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b589b74f-ktxzm_calico-system(a3033d5e-8f86-4407-9e8b-329c5d9f5e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57b589b74f-ktxzm" podUID="a3033d5e-8f86-4407-9e8b-329c5d9f5e56" Apr 21 10:20:27.389532 containerd[1573]: time="2026-04-21T10:20:27.389450453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpxdk,Uid:e073554a-e6ab-44ff-a032-f5d7862b4ec3,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:27.392351 containerd[1573]: time="2026-04-21T10:20:27.392273032Z" level=error msg="Failed to destroy network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.395061 containerd[1573]: time="2026-04-21T10:20:27.394993077Z" 
level=error msg="encountered an error cleaning up failed sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.397030 containerd[1573]: time="2026-04-21T10:20:27.395180716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c5849fb6-ghkc9,Uid:39d404b5-22a7-43b4-8197-a96841dc8873,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.396519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4-shm.mount: Deactivated successfully. 
Apr 21 10:20:27.400370 kubelet[2671]: E0421 10:20:27.400132 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.400370 kubelet[2671]: E0421 10:20:27.400195 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66c5849fb6-ghkc9" Apr 21 10:20:27.400370 kubelet[2671]: E0421 10:20:27.400211 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66c5849fb6-ghkc9" Apr 21 10:20:27.400565 kubelet[2671]: E0421 10:20:27.400290 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66c5849fb6-ghkc9_calico-system(39d404b5-22a7-43b4-8197-a96841dc8873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66c5849fb6-ghkc9_calico-system(39d404b5-22a7-43b4-8197-a96841dc8873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66c5849fb6-ghkc9" podUID="39d404b5-22a7-43b4-8197-a96841dc8873" Apr 21 10:20:27.423722 containerd[1573]: time="2026-04-21T10:20:27.423543765Z" level=error msg="Failed to destroy network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.424790 containerd[1573]: time="2026-04-21T10:20:27.424740948Z" level=error msg="encountered an error cleaning up failed sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.424990 containerd[1573]: time="2026-04-21T10:20:27.424836639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884ccd5b8-w5mwb,Uid:bf5f2a04-0539-42fa-a71e-30dc3c2207a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.425151 kubelet[2671]: E0421 10:20:27.425102 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.425243 kubelet[2671]: E0421 10:20:27.425166 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" Apr 21 10:20:27.425243 kubelet[2671]: E0421 10:20:27.425184 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" Apr 21 10:20:27.425280 kubelet[2671]: E0421 10:20:27.425240 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6884ccd5b8-w5mwb_calico-system(bf5f2a04-0539-42fa-a71e-30dc3c2207a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6884ccd5b8-w5mwb_calico-system(bf5f2a04-0539-42fa-a71e-30dc3c2207a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" podUID="bf5f2a04-0539-42fa-a71e-30dc3c2207a4" Apr 21 10:20:27.427493 containerd[1573]: 
time="2026-04-21T10:20:27.427288693Z" level=error msg="Failed to destroy network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.427818 containerd[1573]: time="2026-04-21T10:20:27.427800512Z" level=error msg="encountered an error cleaning up failed sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.427948 containerd[1573]: time="2026-04-21T10:20:27.427877374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l8wx5,Uid:520d03aa-bd03-4c5c-9f6e-49911f08321d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.428361 kubelet[2671]: E0421 10:20:27.428204 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.428361 kubelet[2671]: E0421 10:20:27.428247 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-l8wx5" Apr 21 10:20:27.428361 kubelet[2671]: E0421 10:20:27.428262 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-l8wx5" Apr 21 10:20:27.428473 kubelet[2671]: E0421 10:20:27.428303 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-l8wx5_calico-system(520d03aa-bd03-4c5c-9f6e-49911f08321d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-l8wx5_calico-system(520d03aa-bd03-4c5c-9f6e-49911f08321d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-l8wx5" podUID="520d03aa-bd03-4c5c-9f6e-49911f08321d" Apr 21 10:20:27.428601 containerd[1573]: time="2026-04-21T10:20:27.428584716Z" level=error msg="Failed to destroy network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 
10:20:27.428946 containerd[1573]: time="2026-04-21T10:20:27.428929500Z" level=error msg="encountered an error cleaning up failed sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.429017 containerd[1573]: time="2026-04-21T10:20:27.429004970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtmzf,Uid:bf7ddc57-ba5f-4002-92d4-b664fca67867,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.429309 kubelet[2671]: E0421 10:20:27.429152 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.429309 kubelet[2671]: E0421 10:20:27.429212 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtmzf" Apr 21 10:20:27.429309 kubelet[2671]: E0421 10:20:27.429227 2671 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtmzf" Apr 21 10:20:27.429489 kubelet[2671]: E0421 10:20:27.429261 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtmzf_kube-system(bf7ddc57-ba5f-4002-92d4-b664fca67867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtmzf_kube-system(bf7ddc57-ba5f-4002-92d4-b664fca67867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtmzf" podUID="bf7ddc57-ba5f-4002-92d4-b664fca67867" Apr 21 10:20:27.443242 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:48884.service - OpenSSH per-connection server daemon (10.0.0.1:48884). 
Apr 21 10:20:27.456079 containerd[1573]: time="2026-04-21T10:20:27.456027703Z" level=error msg="Failed to destroy network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.456666 containerd[1573]: time="2026-04-21T10:20:27.456580864Z" level=error msg="encountered an error cleaning up failed sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.456666 containerd[1573]: time="2026-04-21T10:20:27.456629222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-9ncj6,Uid:301078df-91a6-4fff-b66d-08ba5d84899e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.457092 kubelet[2671]: E0421 10:20:27.457034 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.457144 kubelet[2671]: E0421 10:20:27.457105 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57b589b74f-9ncj6" Apr 21 10:20:27.457144 kubelet[2671]: E0421 10:20:27.457121 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57b589b74f-9ncj6" Apr 21 10:20:27.457182 kubelet[2671]: E0421 10:20:27.457164 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b589b74f-9ncj6_calico-system(301078df-91a6-4fff-b66d-08ba5d84899e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b589b74f-9ncj6_calico-system(301078df-91a6-4fff-b66d-08ba5d84899e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57b589b74f-9ncj6" podUID="301078df-91a6-4fff-b66d-08ba5d84899e" Apr 21 10:20:27.465801 containerd[1573]: time="2026-04-21T10:20:27.465766094Z" level=error msg="Failed to destroy network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.466311 containerd[1573]: time="2026-04-21T10:20:27.466179244Z" level=error msg="encountered an error cleaning up failed sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.466311 containerd[1573]: time="2026-04-21T10:20:27.466221269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrn9t,Uid:de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.466804 kubelet[2671]: E0421 10:20:27.466403 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.466804 kubelet[2671]: E0421 10:20:27.466460 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qrn9t" Apr 21 
10:20:27.466804 kubelet[2671]: E0421 10:20:27.466482 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qrn9t" Apr 21 10:20:27.467424 kubelet[2671]: E0421 10:20:27.466520 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qrn9t_kube-system(de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qrn9t_kube-system(de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qrn9t" podUID="de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87" Apr 21 10:20:27.478396 kubelet[2671]: I0421 10:20:27.478374 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:27.481339 kubelet[2671]: I0421 10:20:27.481276 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Apr 21 10:20:27.482950 sshd[3807]: Accepted publickey for core from 10.0.0.1 port 48884 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:27.484091 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:27.486090 
kubelet[2671]: I0421 10:20:27.486050 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:27.489450 systemd-logind[1561]: New session 8 of user core. Apr 21 10:20:27.494168 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:20:27.496747 kubelet[2671]: I0421 10:20:27.496728 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:27.508890 containerd[1573]: time="2026-04-21T10:20:27.508826315Z" level=info msg="StopPodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\"" Apr 21 10:20:27.509404 containerd[1573]: time="2026-04-21T10:20:27.509353852Z" level=info msg="StopPodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\"" Apr 21 10:20:27.509756 containerd[1573]: time="2026-04-21T10:20:27.509704417Z" level=info msg="StopPodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\"" Apr 21 10:20:27.510111 containerd[1573]: time="2026-04-21T10:20:27.509986280Z" level=info msg="StopPodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\"" Apr 21 10:20:27.510111 containerd[1573]: time="2026-04-21T10:20:27.510024376Z" level=info msg="Ensure that sandbox f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b in task-service has been cleanup successfully" Apr 21 10:20:27.510383 containerd[1573]: time="2026-04-21T10:20:27.510028721Z" level=info msg="Ensure that sandbox 228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c in task-service has been cleanup successfully" Apr 21 10:20:27.514037 containerd[1573]: time="2026-04-21T10:20:27.514017343Z" level=info msg="Ensure that sandbox 9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4 in task-service has been cleanup successfully" Apr 21 
10:20:27.514218 containerd[1573]: time="2026-04-21T10:20:27.514164966Z" level=info msg="Ensure that sandbox 8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9 in task-service has been cleanup successfully" Apr 21 10:20:27.521160 kubelet[2671]: I0421 10:20:27.520555 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:27.521481 containerd[1573]: time="2026-04-21T10:20:27.521440381Z" level=info msg="StopPodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\"" Apr 21 10:20:27.521646 containerd[1573]: time="2026-04-21T10:20:27.521635124Z" level=info msg="Ensure that sandbox 8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06 in task-service has been cleanup successfully" Apr 21 10:20:27.527537 containerd[1573]: time="2026-04-21T10:20:27.527329589Z" level=error msg="Failed to destroy network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.528313 containerd[1573]: time="2026-04-21T10:20:27.528167640Z" level=error msg="encountered an error cleaning up failed sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.528313 containerd[1573]: time="2026-04-21T10:20:27.528221190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpxdk,Uid:e073554a-e6ab-44ff-a032-f5d7862b4ec3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.532992 kubelet[2671]: E0421 10:20:27.532292 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.532992 kubelet[2671]: E0421 10:20:27.532464 2671 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:27.532992 kubelet[2671]: E0421 10:20:27.532480 2671 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dpxdk" Apr 21 10:20:27.533139 kubelet[2671]: E0421 10:20:27.532576 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dpxdk_calico-system(e073554a-e6ab-44ff-a032-f5d7862b4ec3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-dpxdk_calico-system(e073554a-e6ab-44ff-a032-f5d7862b4ec3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dpxdk" podUID="e073554a-e6ab-44ff-a032-f5d7862b4ec3" Apr 21 10:20:27.542004 kubelet[2671]: I0421 10:20:27.541974 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:27.543966 containerd[1573]: time="2026-04-21T10:20:27.543943154Z" level=info msg="StopPodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\"" Apr 21 10:20:27.544317 containerd[1573]: time="2026-04-21T10:20:27.544301729Z" level=info msg="Ensure that sandbox b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59 in task-service has been cleanup successfully" Apr 21 10:20:27.565960 containerd[1573]: time="2026-04-21T10:20:27.565838950Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:20:27.569434 kubelet[2671]: I0421 10:20:27.569171 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:27.573162 containerd[1573]: time="2026-04-21T10:20:27.571015939Z" level=info msg="StopPodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\"" Apr 21 10:20:27.573162 containerd[1573]: time="2026-04-21T10:20:27.571149287Z" level=info msg="Ensure that sandbox 403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd in task-service has been cleanup 
successfully" Apr 21 10:20:27.609965 containerd[1573]: time="2026-04-21T10:20:27.609860442Z" level=info msg="CreateContainer within sandbox \"bf1823ca9caa82b9beabd809dcb2afaeacff1b2284992da3a3ca2efe27bb62d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3e28c245ae2f158f85b147ef8866c038ee8fabc1045a2a28a6227ea6616b397a\"" Apr 21 10:20:27.617568 containerd[1573]: time="2026-04-21T10:20:27.616477442Z" level=info msg="StartContainer for \"3e28c245ae2f158f85b147ef8866c038ee8fabc1045a2a28a6227ea6616b397a\"" Apr 21 10:20:27.617725 containerd[1573]: time="2026-04-21T10:20:27.617663161Z" level=error msg="StopPodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" failed" error="failed to destroy network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.617948 kubelet[2671]: E0421 10:20:27.617891 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Apr 21 10:20:27.618088 kubelet[2671]: E0421 10:20:27.618062 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"} Apr 21 10:20:27.618181 kubelet[2671]: E0421 10:20:27.618172 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39d404b5-22a7-43b4-8197-a96841dc8873\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.618307 kubelet[2671]: E0421 10:20:27.618293 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39d404b5-22a7-43b4-8197-a96841dc8873\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66c5849fb6-ghkc9" podUID="39d404b5-22a7-43b4-8197-a96841dc8873" Apr 21 10:20:27.628497 containerd[1573]: time="2026-04-21T10:20:27.628468534Z" level=error msg="StopPodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" failed" error="failed to destroy network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.628742 containerd[1573]: time="2026-04-21T10:20:27.628678677Z" level=error msg="StopPodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" failed" error="failed to destroy network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.628946 
kubelet[2671]: E0421 10:20:27.628920 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:27.629063 kubelet[2671]: E0421 10:20:27.629049 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c"} Apr 21 10:20:27.629150 kubelet[2671]: E0421 10:20:27.629139 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"301078df-91a6-4fff-b66d-08ba5d84899e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.629264 kubelet[2671]: E0421 10:20:27.629248 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"301078df-91a6-4fff-b66d-08ba5d84899e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57b589b74f-9ncj6" podUID="301078df-91a6-4fff-b66d-08ba5d84899e" Apr 21 10:20:27.629485 kubelet[2671]: E0421 
10:20:27.629471 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:27.629560 kubelet[2671]: E0421 10:20:27.629552 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b"} Apr 21 10:20:27.629653 kubelet[2671]: E0421 10:20:27.629639 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"520d03aa-bd03-4c5c-9f6e-49911f08321d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.629766 kubelet[2671]: E0421 10:20:27.629750 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"520d03aa-bd03-4c5c-9f6e-49911f08321d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-l8wx5" podUID="520d03aa-bd03-4c5c-9f6e-49911f08321d" Apr 21 10:20:27.630835 containerd[1573]: time="2026-04-21T10:20:27.630792114Z" 
level=error msg="StopPodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" failed" error="failed to destroy network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.631105 kubelet[2671]: E0421 10:20:27.631082 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:27.631202 kubelet[2671]: E0421 10:20:27.631193 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06"} Apr 21 10:20:27.631294 kubelet[2671]: E0421 10:20:27.631285 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf7ddc57-ba5f-4002-92d4-b664fca67867\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.631435 kubelet[2671]: E0421 10:20:27.631403 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf7ddc57-ba5f-4002-92d4-b664fca67867\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtmzf" podUID="bf7ddc57-ba5f-4002-92d4-b664fca67867" Apr 21 10:20:27.636509 containerd[1573]: time="2026-04-21T10:20:27.636414335Z" level=error msg="StopPodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" failed" error="failed to destroy network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.636672 kubelet[2671]: E0421 10:20:27.636624 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:27.636672 kubelet[2671]: E0421 10:20:27.636659 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9"} Apr 21 10:20:27.636774 kubelet[2671]: E0421 10:20:27.636676 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3033d5e-8f86-4407-9e8b-329c5d9f5e56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.636774 kubelet[2671]: E0421 10:20:27.636719 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3033d5e-8f86-4407-9e8b-329c5d9f5e56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57b589b74f-ktxzm" podUID="a3033d5e-8f86-4407-9e8b-329c5d9f5e56" Apr 21 10:20:27.639095 containerd[1573]: time="2026-04-21T10:20:27.639062448Z" level=error msg="StopPodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" failed" error="failed to destroy network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.639285 kubelet[2671]: E0421 10:20:27.639214 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:27.639317 kubelet[2671]: E0421 10:20:27.639292 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59"} Apr 21 10:20:27.639317 kubelet[2671]: E0421 10:20:27.639312 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.639405 kubelet[2671]: E0421 10:20:27.639327 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qrn9t" podUID="de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87" Apr 21 10:20:27.642195 containerd[1573]: time="2026-04-21T10:20:27.642083925Z" level=error msg="StopPodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" failed" error="failed to destroy network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:20:27.642285 kubelet[2671]: E0421 10:20:27.642255 2671 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:27.642356 kubelet[2671]: E0421 10:20:27.642290 2671 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd"} Apr 21 10:20:27.642356 kubelet[2671]: E0421 10:20:27.642308 2671 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf5f2a04-0539-42fa-a71e-30dc3c2207a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:20:27.642356 kubelet[2671]: E0421 10:20:27.642323 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf5f2a04-0539-42fa-a71e-30dc3c2207a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" podUID="bf5f2a04-0539-42fa-a71e-30dc3c2207a4" Apr 21 10:20:27.676313 sshd[3807]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:27.680017 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:48884.service: Deactivated successfully. 
Apr 21 10:20:27.681477 containerd[1573]: time="2026-04-21T10:20:27.680642650Z" level=info msg="StartContainer for \"3e28c245ae2f158f85b147ef8866c038ee8fabc1045a2a28a6227ea6616b397a\" returns successfully" Apr 21 10:20:27.682522 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:20:27.683033 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:20:27.683821 systemd-logind[1561]: Removed session 8. Apr 21 10:20:28.275736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6-shm.mount: Deactivated successfully. Apr 21 10:20:28.275860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59-shm.mount: Deactivated successfully. Apr 21 10:20:28.275959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c-shm.mount: Deactivated successfully. Apr 21 10:20:28.276025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b-shm.mount: Deactivated successfully. Apr 21 10:20:28.276088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06-shm.mount: Deactivated successfully. Apr 21 10:20:28.276150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd-shm.mount: Deactivated successfully. 
Apr 21 10:20:28.612120 kubelet[2671]: I0421 10:20:28.611457 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:28.614038 containerd[1573]: time="2026-04-21T10:20:28.613462054Z" level=info msg="StopPodSandbox for \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\"" Apr 21 10:20:28.614248 containerd[1573]: time="2026-04-21T10:20:28.614074382Z" level=info msg="Ensure that sandbox 8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6 in task-service has been cleanup successfully" Apr 21 10:20:28.614684 containerd[1573]: time="2026-04-21T10:20:28.614479594Z" level=info msg="StopPodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\"" Apr 21 10:20:28.692674 kubelet[2671]: I0421 10:20:28.692567 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-68tnv" podStartSLOduration=3.003909775 podStartE2EDuration="21.69254639s" podCreationTimestamp="2026-04-21 10:20:07 +0000 UTC" firstStartedPulling="2026-04-21 10:20:07.569616707 +0000 UTC m=+15.290071557" lastFinishedPulling="2026-04-21 10:20:26.258253321 +0000 UTC m=+33.978708172" observedRunningTime="2026-04-21 10:20:28.642184794 +0000 UTC m=+36.362639651" watchObservedRunningTime="2026-04-21 10:20:28.69254639 +0000 UTC m=+36.413001251" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.693 [INFO][4052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.693 [INFO][4052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" iface="eth0" netns="/var/run/netns/cni-07c1d0a8-c425-6a7d-3dcd-37a7173cf234" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.693 [INFO][4052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" iface="eth0" netns="/var/run/netns/cni-07c1d0a8-c425-6a7d-3dcd-37a7173cf234" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.694 [INFO][4052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" iface="eth0" netns="/var/run/netns/cni-07c1d0a8-c425-6a7d-3dcd-37a7173cf234" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.694 [INFO][4052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.694 [INFO][4052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.715 [INFO][4079] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.715 [INFO][4079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.715 [INFO][4079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.724 [WARNING][4079] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.724 [INFO][4079] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0" Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.726 [INFO][4079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:28.730482 containerd[1573]: 2026-04-21 10:20:28.729 [INFO][4052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Apr 21 10:20:28.732732 containerd[1573]: time="2026-04-21T10:20:28.731155121Z" level=info msg="TearDown network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" successfully" Apr 21 10:20:28.732732 containerd[1573]: time="2026-04-21T10:20:28.731540772Z" level=info msg="StopPodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" returns successfully" Apr 21 10:20:28.734406 systemd[1]: run-netns-cni\x2d07c1d0a8\x2dc425\x2d6a7d\x2d3dcd\x2d37a7173cf234.mount: Deactivated successfully. Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.693 [INFO][4064] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.693 [INFO][4064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" iface="eth0" netns="/var/run/netns/cni-a26e9cbd-007a-49af-d729-e2d33988cfa8" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.693 [INFO][4064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" iface="eth0" netns="/var/run/netns/cni-a26e9cbd-007a-49af-d729-e2d33988cfa8" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.694 [INFO][4064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" iface="eth0" netns="/var/run/netns/cni-a26e9cbd-007a-49af-d729-e2d33988cfa8" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.694 [INFO][4064] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.694 [INFO][4064] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.716 [INFO][4078] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.716 [INFO][4078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.726 [INFO][4078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.733 [WARNING][4078] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.734 [INFO][4078] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.735 [INFO][4078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:28.738097 containerd[1573]: 2026-04-21 10:20:28.736 [INFO][4064] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:28.738430 containerd[1573]: time="2026-04-21T10:20:28.738245203Z" level=info msg="TearDown network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\" successfully" Apr 21 10:20:28.738430 containerd[1573]: time="2026-04-21T10:20:28.738260203Z" level=info msg="StopPodSandbox for \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\" returns successfully" Apr 21 10:20:28.738711 containerd[1573]: time="2026-04-21T10:20:28.738665559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpxdk,Uid:e073554a-e6ab-44ff-a032-f5d7862b4ec3,Namespace:calico-system,Attempt:1,}" Apr 21 10:20:28.740136 systemd[1]: run-netns-cni\x2da26e9cbd\x2d007a\x2d49af\x2dd729\x2de2d33988cfa8.mount: Deactivated successfully. 
Apr 21 10:20:28.825124 kubelet[2671]: I0421 10:20:28.825055 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578qj\" (UniqueName: \"kubernetes.io/projected/39d404b5-22a7-43b4-8197-a96841dc8873-kube-api-access-578qj\") pod \"39d404b5-22a7-43b4-8197-a96841dc8873\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " Apr 21 10:20:28.825414 kubelet[2671]: I0421 10:20:28.825153 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-nginx-config\") pod \"39d404b5-22a7-43b4-8197-a96841dc8873\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " Apr 21 10:20:28.825414 kubelet[2671]: I0421 10:20:28.825193 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-ca-bundle\") pod \"39d404b5-22a7-43b4-8197-a96841dc8873\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " Apr 21 10:20:28.825414 kubelet[2671]: I0421 10:20:28.825206 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-backend-key-pair\") pod \"39d404b5-22a7-43b4-8197-a96841dc8873\" (UID: \"39d404b5-22a7-43b4-8197-a96841dc8873\") " Apr 21 10:20:28.825817 kubelet[2671]: I0421 10:20:28.825766 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "39d404b5-22a7-43b4-8197-a96841dc8873" (UID: "39d404b5-22a7-43b4-8197-a96841dc8873"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:20:28.825817 kubelet[2671]: I0421 10:20:28.825792 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "39d404b5-22a7-43b4-8197-a96841dc8873" (UID: "39d404b5-22a7-43b4-8197-a96841dc8873"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:20:28.828177 kubelet[2671]: I0421 10:20:28.828143 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "39d404b5-22a7-43b4-8197-a96841dc8873" (UID: "39d404b5-22a7-43b4-8197-a96841dc8873"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:20:28.829035 kubelet[2671]: I0421 10:20:28.829016 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d404b5-22a7-43b4-8197-a96841dc8873-kube-api-access-578qj" (OuterVolumeSpecName: "kube-api-access-578qj") pod "39d404b5-22a7-43b4-8197-a96841dc8873" (UID: "39d404b5-22a7-43b4-8197-a96841dc8873"). InnerVolumeSpecName "kube-api-access-578qj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:20:28.862753 systemd-networkd[1260]: calif6de2a3c5fb: Link UP Apr 21 10:20:28.862979 systemd-networkd[1260]: calif6de2a3c5fb: Gained carrier Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.777 [ERROR][4094] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.788 [INFO][4094] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dpxdk-eth0 csi-node-driver- calico-system e073554a-e6ab-44ff-a032-f5d7862b4ec3 990 0 2026-04-21 10:20:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dpxdk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif6de2a3c5fb [] [] }} ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.788 [INFO][4094] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.815 [INFO][4107] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" 
HandleID="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.823 [INFO][4107] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" HandleID="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000509ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dpxdk", "timestamp":"2026-04-21 10:20:28.815131775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e6dc0)} Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.823 [INFO][4107] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.824 [INFO][4107] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.824 [INFO][4107] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.828 [INFO][4107] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.833 [INFO][4107] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.837 [INFO][4107] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.839 [INFO][4107] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.841 [INFO][4107] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.841 [INFO][4107] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.842 [INFO][4107] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4 Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.845 [INFO][4107] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.850 [INFO][4107] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.850 [INFO][4107] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" host="localhost" Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.850 [INFO][4107] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:28.882268 containerd[1573]: 2026-04-21 10:20:28.850 [INFO][4107] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" HandleID="k8s-pod-network.58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.883277 containerd[1573]: 2026-04-21 10:20:28.852 [INFO][4094] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpxdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e073554a-e6ab-44ff-a032-f5d7862b4ec3", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dpxdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6de2a3c5fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:28.883277 containerd[1573]: 2026-04-21 10:20:28.852 [INFO][4094] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.883277 containerd[1573]: 2026-04-21 10:20:28.852 [INFO][4094] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6de2a3c5fb ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.883277 containerd[1573]: 2026-04-21 10:20:28.861 [INFO][4094] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.883277 containerd[1573]: 2026-04-21 10:20:28.861 [INFO][4094] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" 
Namespace="calico-system" Pod="csi-node-driver-dpxdk" WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpxdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e073554a-e6ab-44ff-a032-f5d7862b4ec3", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4", Pod:"csi-node-driver-dpxdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6de2a3c5fb", MAC:"6e:3f:2a:d7:bb:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:28.883277 containerd[1573]: 2026-04-21 10:20:28.877 [INFO][4094] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4" Namespace="calico-system" Pod="csi-node-driver-dpxdk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:28.901475 containerd[1573]: time="2026-04-21T10:20:28.901358969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:28.902200 containerd[1573]: time="2026-04-21T10:20:28.902056585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:28.902200 containerd[1573]: time="2026-04-21T10:20:28.902125057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:28.902312 containerd[1573]: time="2026-04-21T10:20:28.902249880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:28.924338 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:20:28.926270 kubelet[2671]: I0421 10:20:28.926150 2671 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 21 10:20:28.926270 kubelet[2671]: I0421 10:20:28.926175 2671 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39d404b5-22a7-43b4-8197-a96841dc8873-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 21 10:20:28.926270 kubelet[2671]: I0421 10:20:28.926185 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-578qj\" (UniqueName: \"kubernetes.io/projected/39d404b5-22a7-43b4-8197-a96841dc8873-kube-api-access-578qj\") on node \"localhost\" DevicePath \"\"" Apr 21 10:20:28.926270 kubelet[2671]: I0421 10:20:28.926194 2671 reconciler_common.go:299] "Volume detached 
for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39d404b5-22a7-43b4-8197-a96841dc8873-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 21 10:20:28.936174 containerd[1573]: time="2026-04-21T10:20:28.936137348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dpxdk,Uid:e073554a-e6ab-44ff-a032-f5d7862b4ec3,Namespace:calico-system,Attempt:1,} returns sandbox id \"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4\"" Apr 21 10:20:28.938031 containerd[1573]: time="2026-04-21T10:20:28.937808235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:20:29.275055 systemd[1]: var-lib-kubelet-pods-39d404b5\x2d22a7\x2d43b4\x2d8197\x2da96841dc8873-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d578qj.mount: Deactivated successfully. Apr 21 10:20:29.275183 systemd[1]: var-lib-kubelet-pods-39d404b5\x2d22a7\x2d43b4\x2d8197\x2da96841dc8873-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 10:20:29.618078 kubelet[2671]: I0421 10:20:29.618022 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:20:29.835271 kubelet[2671]: I0421 10:20:29.835188 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/89fe0107-26e9-4134-b2cb-f00c37b4fbe3-nginx-config\") pod \"whisker-7f855674cf-tth9x\" (UID: \"89fe0107-26e9-4134-b2cb-f00c37b4fbe3\") " pod="calico-system/whisker-7f855674cf-tth9x" Apr 21 10:20:29.835271 kubelet[2671]: I0421 10:20:29.835266 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89fe0107-26e9-4134-b2cb-f00c37b4fbe3-whisker-ca-bundle\") pod \"whisker-7f855674cf-tth9x\" (UID: \"89fe0107-26e9-4134-b2cb-f00c37b4fbe3\") " pod="calico-system/whisker-7f855674cf-tth9x" Apr 21 10:20:29.835271 kubelet[2671]: I0421 10:20:29.835288 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mlf4\" (UniqueName: \"kubernetes.io/projected/89fe0107-26e9-4134-b2cb-f00c37b4fbe3-kube-api-access-8mlf4\") pod \"whisker-7f855674cf-tth9x\" (UID: \"89fe0107-26e9-4134-b2cb-f00c37b4fbe3\") " pod="calico-system/whisker-7f855674cf-tth9x" Apr 21 10:20:29.835570 kubelet[2671]: I0421 10:20:29.835313 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89fe0107-26e9-4134-b2cb-f00c37b4fbe3-whisker-backend-key-pair\") pod \"whisker-7f855674cf-tth9x\" (UID: \"89fe0107-26e9-4134-b2cb-f00c37b4fbe3\") " pod="calico-system/whisker-7f855674cf-tth9x" Apr 21 10:20:29.889858 systemd-networkd[1260]: calif6de2a3c5fb: Gained IPv6LL Apr 21 10:20:30.009742 containerd[1573]: time="2026-04-21T10:20:30.009654903Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7f855674cf-tth9x,Uid:89fe0107-26e9-4134-b2cb-f00c37b4fbe3,Namespace:calico-system,Attempt:0,}" Apr 21 10:20:30.143450 systemd-networkd[1260]: cali8da54b719d1: Link UP Apr 21 10:20:30.144152 systemd-networkd[1260]: cali8da54b719d1: Gained carrier Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.057 [ERROR][4279] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.071 [INFO][4279] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f855674cf--tth9x-eth0 whisker-7f855674cf- calico-system 89fe0107-26e9-4134-b2cb-f00c37b4fbe3 1013 0 2026-04-21 10:20:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f855674cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f855674cf-tth9x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8da54b719d1 [] [] }} ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.071 [INFO][4279] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.096 [INFO][4292] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" 
HandleID="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Workload="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.106 [INFO][4292] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" HandleID="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Workload="localhost-k8s-whisker--7f855674cf--tth9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f855674cf-tth9x", "timestamp":"2026-04-21 10:20:30.096730613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000452dc0)} Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.107 [INFO][4292] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.107 [INFO][4292] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.107 [INFO][4292] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.110 [INFO][4292] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.114 [INFO][4292] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.118 [INFO][4292] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.120 [INFO][4292] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.126 [INFO][4292] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.126 [INFO][4292] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.128 [INFO][4292] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776 Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.134 [INFO][4292] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.139 [INFO][4292] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.139 [INFO][4292] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" host="localhost" Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.139 [INFO][4292] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:30.155979 containerd[1573]: 2026-04-21 10:20:30.139 [INFO][4292] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" HandleID="k8s-pod-network.2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Workload="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.156475 containerd[1573]: 2026-04-21 10:20:30.141 [INFO][4279] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f855674cf--tth9x-eth0", GenerateName:"whisker-7f855674cf-", Namespace:"calico-system", SelfLink:"", UID:"89fe0107-26e9-4134-b2cb-f00c37b4fbe3", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f855674cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f855674cf-tth9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8da54b719d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:30.156475 containerd[1573]: 2026-04-21 10:20:30.141 [INFO][4279] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.156475 containerd[1573]: 2026-04-21 10:20:30.141 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8da54b719d1 ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.156475 containerd[1573]: 2026-04-21 10:20:30.143 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.156475 containerd[1573]: 2026-04-21 10:20:30.144 [INFO][4279] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" 
WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f855674cf--tth9x-eth0", GenerateName:"whisker-7f855674cf-", Namespace:"calico-system", SelfLink:"", UID:"89fe0107-26e9-4134-b2cb-f00c37b4fbe3", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f855674cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776", Pod:"whisker-7f855674cf-tth9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8da54b719d1", MAC:"42:81:ae:4d:84:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:30.156475 containerd[1573]: 2026-04-21 10:20:30.153 [INFO][4279] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776" Namespace="calico-system" Pod="whisker-7f855674cf-tth9x" WorkloadEndpoint="localhost-k8s-whisker--7f855674cf--tth9x-eth0" Apr 21 10:20:30.171059 containerd[1573]: time="2026-04-21T10:20:30.170990812Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:30.171171 containerd[1573]: time="2026-04-21T10:20:30.171045557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:30.171171 containerd[1573]: time="2026-04-21T10:20:30.171131797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:30.172191 containerd[1573]: time="2026-04-21T10:20:30.172131006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:30.196761 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:20:30.222994 containerd[1573]: time="2026-04-21T10:20:30.222950095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f855674cf-tth9x,Uid:89fe0107-26e9-4134-b2cb-f00c37b4fbe3,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776\"" Apr 21 10:20:30.382925 kubelet[2671]: I0421 10:20:30.382824 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d404b5-22a7-43b4-8197-a96841dc8873" path="/var/lib/kubelet/pods/39d404b5-22a7-43b4-8197-a96841dc8873/volumes" Apr 21 10:20:30.752087 containerd[1573]: time="2026-04-21T10:20:30.752015172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:30.753292 containerd[1573]: time="2026-04-21T10:20:30.753209408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:20:30.755024 containerd[1573]: time="2026-04-21T10:20:30.754971231Z" level=info msg="ImageCreate event 
name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:30.758081 containerd[1573]: time="2026-04-21T10:20:30.758009999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:30.759187 containerd[1573]: time="2026-04-21T10:20:30.759142231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.821308323s" Apr 21 10:20:30.759187 containerd[1573]: time="2026-04-21T10:20:30.759183383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:20:30.761166 containerd[1573]: time="2026-04-21T10:20:30.761099027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:20:30.770251 containerd[1573]: time="2026-04-21T10:20:30.770173942Z" level=info msg="CreateContainer within sandbox \"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:20:30.788288 containerd[1573]: time="2026-04-21T10:20:30.788217476Z" level=info msg="CreateContainer within sandbox \"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d563051bf713dffe8585520edc35091fa4427930d3011209ed0602cd89b88a9\"" Apr 21 10:20:30.789037 containerd[1573]: time="2026-04-21T10:20:30.789013513Z" level=info msg="StartContainer for 
\"2d563051bf713dffe8585520edc35091fa4427930d3011209ed0602cd89b88a9\"" Apr 21 10:20:30.822780 systemd[1]: run-containerd-runc-k8s.io-2d563051bf713dffe8585520edc35091fa4427930d3011209ed0602cd89b88a9-runc.vrmrqi.mount: Deactivated successfully. Apr 21 10:20:30.850202 containerd[1573]: time="2026-04-21T10:20:30.850159414Z" level=info msg="StartContainer for \"2d563051bf713dffe8585520edc35091fa4427930d3011209ed0602cd89b88a9\" returns successfully" Apr 21 10:20:32.067062 systemd-networkd[1260]: cali8da54b719d1: Gained IPv6LL Apr 21 10:20:32.653459 containerd[1573]: time="2026-04-21T10:20:32.653384778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:32.654109 containerd[1573]: time="2026-04-21T10:20:32.654043880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:20:32.655148 containerd[1573]: time="2026-04-21T10:20:32.655106279Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:32.658153 containerd[1573]: time="2026-04-21T10:20:32.658122317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:32.658584 containerd[1573]: time="2026-04-21T10:20:32.658560434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.897423712s" Apr 21 10:20:32.658613 containerd[1573]: 
time="2026-04-21T10:20:32.658590604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:20:32.659730 containerd[1573]: time="2026-04-21T10:20:32.659492529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:20:32.662812 containerd[1573]: time="2026-04-21T10:20:32.662782881Z" level=info msg="CreateContainer within sandbox \"2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:20:32.678885 containerd[1573]: time="2026-04-21T10:20:32.678843764Z" level=info msg="CreateContainer within sandbox \"2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7306779ac3aa308023ba642f045bb4d3566d179b185bc6cfed19e2cfb5c4b2fc\"" Apr 21 10:20:32.679605 containerd[1573]: time="2026-04-21T10:20:32.679575394Z" level=info msg="StartContainer for \"7306779ac3aa308023ba642f045bb4d3566d179b185bc6cfed19e2cfb5c4b2fc\"" Apr 21 10:20:32.687350 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:54204.service - OpenSSH per-connection server daemon (10.0.0.1:54204). Apr 21 10:20:32.723393 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 54204 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:32.724765 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:32.728871 systemd-logind[1561]: New session 9 of user core. Apr 21 10:20:32.733263 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 10:20:32.739298 containerd[1573]: time="2026-04-21T10:20:32.739256575Z" level=info msg="StartContainer for \"7306779ac3aa308023ba642f045bb4d3566d179b185bc6cfed19e2cfb5c4b2fc\" returns successfully" Apr 21 10:20:32.849999 sshd[4465]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:32.852627 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:54204.service: Deactivated successfully. Apr 21 10:20:32.854326 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:20:32.854433 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:20:32.855347 systemd-logind[1561]: Removed session 9. Apr 21 10:20:33.490533 kubelet[2671]: I0421 10:20:33.490281 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:20:33.491083 kubelet[2671]: E0421 10:20:33.490746 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:33.631017 kubelet[2671]: E0421 10:20:33.630961 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:34.564954 kernel: calico-node[4565]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:20:34.891633 containerd[1573]: time="2026-04-21T10:20:34.891451765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:34.892345 containerd[1573]: time="2026-04-21T10:20:34.892218400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:20:34.893380 containerd[1573]: time="2026-04-21T10:20:34.893322354Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:34.895856 containerd[1573]: time="2026-04-21T10:20:34.895825717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:34.896211 containerd[1573]: time="2026-04-21T10:20:34.896177440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.236664997s" Apr 21 10:20:34.896245 containerd[1573]: time="2026-04-21T10:20:34.896215979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:20:34.899191 containerd[1573]: time="2026-04-21T10:20:34.899143136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:20:34.907854 containerd[1573]: time="2026-04-21T10:20:34.906829681Z" level=info msg="CreateContainer within sandbox \"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:20:34.927581 containerd[1573]: time="2026-04-21T10:20:34.927521045Z" level=info msg="CreateContainer within sandbox \"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8066c3c8233b16ea400dd2583a8a40fa82b72717ad4227a4560e3328f9f10929\"" Apr 21 10:20:34.928140 containerd[1573]: time="2026-04-21T10:20:34.928114843Z" level=info 
msg="StartContainer for \"8066c3c8233b16ea400dd2583a8a40fa82b72717ad4227a4560e3328f9f10929\"" Apr 21 10:20:34.960026 systemd-networkd[1260]: vxlan.calico: Link UP Apr 21 10:20:34.960032 systemd-networkd[1260]: vxlan.calico: Gained carrier Apr 21 10:20:35.002809 containerd[1573]: time="2026-04-21T10:20:35.002763556Z" level=info msg="StartContainer for \"8066c3c8233b16ea400dd2583a8a40fa82b72717ad4227a4560e3328f9f10929\" returns successfully" Apr 21 10:20:35.229604 kubelet[2671]: I0421 10:20:35.229458 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:20:35.440597 kubelet[2671]: I0421 10:20:35.440544 2671 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:20:35.441586 kubelet[2671]: I0421 10:20:35.441552 2671 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:20:35.672996 kubelet[2671]: I0421 10:20:35.672887 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dpxdk" podStartSLOduration=22.711831253 podStartE2EDuration="28.672874482s" podCreationTimestamp="2026-04-21 10:20:07 +0000 UTC" firstStartedPulling="2026-04-21 10:20:28.9375942 +0000 UTC m=+36.658049050" lastFinishedPulling="2026-04-21 10:20:34.898637429 +0000 UTC m=+42.619092279" observedRunningTime="2026-04-21 10:20:35.67261843 +0000 UTC m=+43.393073299" watchObservedRunningTime="2026-04-21 10:20:35.672874482 +0000 UTC m=+43.393329343" Apr 21 10:20:36.482211 systemd-networkd[1260]: vxlan.calico: Gained IPv6LL Apr 21 10:20:36.977518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289084349.mount: Deactivated successfully. 
Apr 21 10:20:37.004010 containerd[1573]: time="2026-04-21T10:20:37.003787074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:37.005387 containerd[1573]: time="2026-04-21T10:20:37.005297632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:20:37.007038 containerd[1573]: time="2026-04-21T10:20:37.006989333Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:37.012087 containerd[1573]: time="2026-04-21T10:20:37.011993964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:37.013272 containerd[1573]: time="2026-04-21T10:20:37.013234706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.114068154s" Apr 21 10:20:37.013370 containerd[1573]: time="2026-04-21T10:20:37.013272578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:20:37.022397 containerd[1573]: time="2026-04-21T10:20:37.022319683Z" level=info msg="CreateContainer within sandbox \"2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:20:37.041300 
containerd[1573]: time="2026-04-21T10:20:37.041238699Z" level=info msg="CreateContainer within sandbox \"2c5ca8c4caa311bbe78a652a3bcf72c56fd0bdaf297428ef3d717fa77073e776\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a1a6cdb70fafd377e9afb75ba030f5c44da3e9a522ec50e4866d1f71e83371e\"" Apr 21 10:20:37.042288 containerd[1573]: time="2026-04-21T10:20:37.042211946Z" level=info msg="StartContainer for \"7a1a6cdb70fafd377e9afb75ba030f5c44da3e9a522ec50e4866d1f71e83371e\"" Apr 21 10:20:37.120430 containerd[1573]: time="2026-04-21T10:20:37.119654866Z" level=info msg="StartContainer for \"7a1a6cdb70fafd377e9afb75ba030f5c44da3e9a522ec50e4866d1f71e83371e\" returns successfully" Apr 21 10:20:37.664716 kubelet[2671]: I0421 10:20:37.664588 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f855674cf-tth9x" podStartSLOduration=1.8745116309999998 podStartE2EDuration="8.664562566s" podCreationTimestamp="2026-04-21 10:20:29 +0000 UTC" firstStartedPulling="2026-04-21 10:20:30.22402878 +0000 UTC m=+37.944483631" lastFinishedPulling="2026-04-21 10:20:37.014079717 +0000 UTC m=+44.734534566" observedRunningTime="2026-04-21 10:20:37.664265139 +0000 UTC m=+45.384720027" watchObservedRunningTime="2026-04-21 10:20:37.664562566 +0000 UTC m=+45.385017432" Apr 21 10:20:37.813555 systemd[1]: run-containerd-runc-k8s.io-7a1a6cdb70fafd377e9afb75ba030f5c44da3e9a522ec50e4866d1f71e83371e-runc.TBeJgp.mount: Deactivated successfully. Apr 21 10:20:37.864718 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:54220.service - OpenSSH per-connection server daemon (10.0.0.1:54220). Apr 21 10:20:37.907285 sshd[4819]: Accepted publickey for core from 10.0.0.1 port 54220 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:37.908764 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:37.912968 systemd-logind[1561]: New session 10 of user core. 
Apr 21 10:20:37.926230 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:20:38.050081 sshd[4819]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:38.052926 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:54220.service: Deactivated successfully. Apr 21 10:20:38.054612 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:20:38.054630 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:20:38.055767 systemd-logind[1561]: Removed session 10. Apr 21 10:20:38.394578 containerd[1573]: time="2026-04-21T10:20:38.392297017Z" level=info msg="StopPodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\"" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.506 [INFO][4848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.507 [INFO][4848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" iface="eth0" netns="/var/run/netns/cni-59cbcc5e-208b-750d-2855-863978673b50" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.507 [INFO][4848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" iface="eth0" netns="/var/run/netns/cni-59cbcc5e-208b-750d-2855-863978673b50" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.507 [INFO][4848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" iface="eth0" netns="/var/run/netns/cni-59cbcc5e-208b-750d-2855-863978673b50" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.507 [INFO][4848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.507 [INFO][4848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.533 [INFO][4857] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.533 [INFO][4857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.533 [INFO][4857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.545 [WARNING][4857] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.545 [INFO][4857] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.547 [INFO][4857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:38.552387 containerd[1573]: 2026-04-21 10:20:38.549 [INFO][4848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:38.553123 containerd[1573]: time="2026-04-21T10:20:38.552725132Z" level=info msg="TearDown network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" successfully" Apr 21 10:20:38.553123 containerd[1573]: time="2026-04-21T10:20:38.552781084Z" level=info msg="StopPodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" returns successfully" Apr 21 10:20:38.555313 kubelet[2671]: E0421 10:20:38.555279 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:38.555861 systemd[1]: run-netns-cni\x2d59cbcc5e\x2d208b\x2d750d\x2d2855\x2d863978673b50.mount: Deactivated successfully. 
Apr 21 10:20:38.557610 containerd[1573]: time="2026-04-21T10:20:38.556019285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrn9t,Uid:de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87,Namespace:kube-system,Attempt:1,}" Apr 21 10:20:38.692650 systemd-networkd[1260]: cali95c343dc367: Link UP Apr 21 10:20:38.692794 systemd-networkd[1260]: cali95c343dc367: Gained carrier Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.622 [INFO][4866] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0 coredns-674b8bbfcf- kube-system de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87 1095 0 2026-04-21 10:19:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qrn9t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95c343dc367 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.623 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.647 [INFO][4880] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" HandleID="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 
21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.657 [INFO][4880] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" HandleID="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f840), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qrn9t", "timestamp":"2026-04-21 10:20:38.647986765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000386420)} Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.657 [INFO][4880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.657 [INFO][4880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.657 [INFO][4880] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.661 [INFO][4880] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.665 [INFO][4880] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.669 [INFO][4880] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.671 [INFO][4880] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.673 [INFO][4880] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.673 [INFO][4880] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.675 [INFO][4880] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8 Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.680 [INFO][4880] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.687 [INFO][4880] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.687 [INFO][4880] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" host="localhost" Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.687 [INFO][4880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:38.706295 containerd[1573]: 2026-04-21 10:20:38.687 [INFO][4880] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" HandleID="k8s-pod-network.e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.706711 containerd[1573]: 2026-04-21 10:20:38.689 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qrn9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c343dc367", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:38.706711 containerd[1573]: 2026-04-21 10:20:38.689 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.706711 containerd[1573]: 2026-04-21 10:20:38.689 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95c343dc367 ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.706711 containerd[1573]: 2026-04-21 10:20:38.691 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.706711 containerd[1573]: 2026-04-21 10:20:38.691 [INFO][4866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8", Pod:"coredns-674b8bbfcf-qrn9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c343dc367", MAC:"e2:ae:fe:61:32:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:38.706711 containerd[1573]: 2026-04-21 10:20:38.703 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrn9t" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:38.723487 containerd[1573]: time="2026-04-21T10:20:38.723211658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:38.723487 containerd[1573]: time="2026-04-21T10:20:38.723262011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:38.723487 containerd[1573]: time="2026-04-21T10:20:38.723270889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:38.723487 containerd[1573]: time="2026-04-21T10:20:38.723351337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:38.748928 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:20:38.774591 containerd[1573]: time="2026-04-21T10:20:38.774561568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrn9t,Uid:de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87,Namespace:kube-system,Attempt:1,} returns sandbox id \"e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8\"" Apr 21 10:20:38.775430 kubelet[2671]: E0421 10:20:38.775399 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:38.781654 containerd[1573]: time="2026-04-21T10:20:38.781594161Z" level=info msg="CreateContainer within sandbox \"e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:20:38.803808 containerd[1573]: time="2026-04-21T10:20:38.803663382Z" level=info msg="CreateContainer within sandbox \"e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9913a1eaf64c5737af5902d66d0957d68dff73707b6f7de08ecff5e867c433da\"" Apr 21 10:20:38.804331 containerd[1573]: time="2026-04-21T10:20:38.804283440Z" level=info msg="StartContainer for \"9913a1eaf64c5737af5902d66d0957d68dff73707b6f7de08ecff5e867c433da\"" Apr 21 10:20:38.848833 containerd[1573]: time="2026-04-21T10:20:38.848720267Z" level=info msg="StartContainer for \"9913a1eaf64c5737af5902d66d0957d68dff73707b6f7de08ecff5e867c433da\" returns successfully" Apr 21 10:20:39.654251 kubelet[2671]: E0421 10:20:39.654211 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
10:20:39.668999 kubelet[2671]: I0421 10:20:39.668668 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qrn9t" podStartSLOduration=41.668650521 podStartE2EDuration="41.668650521s" podCreationTimestamp="2026-04-21 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:20:39.668195918 +0000 UTC m=+47.388650788" watchObservedRunningTime="2026-04-21 10:20:39.668650521 +0000 UTC m=+47.389105382" Apr 21 10:20:40.382393 containerd[1573]: time="2026-04-21T10:20:40.382299816Z" level=info msg="StopPodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\"" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.437 [INFO][5015] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.437 [INFO][5015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" iface="eth0" netns="/var/run/netns/cni-642d603f-ab19-c6d8-28b9-22e303d60992" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.438 [INFO][5015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" iface="eth0" netns="/var/run/netns/cni-642d603f-ab19-c6d8-28b9-22e303d60992" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.441 [INFO][5015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" iface="eth0" netns="/var/run/netns/cni-642d603f-ab19-c6d8-28b9-22e303d60992" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.441 [INFO][5015] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.441 [INFO][5015] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.464 [INFO][5024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.464 [INFO][5024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.464 [INFO][5024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.469 [WARNING][5024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.469 [INFO][5024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.471 [INFO][5024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:40.474308 containerd[1573]: 2026-04-21 10:20:40.472 [INFO][5015] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:40.474691 containerd[1573]: time="2026-04-21T10:20:40.474653246Z" level=info msg="TearDown network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" successfully" Apr 21 10:20:40.474691 containerd[1573]: time="2026-04-21T10:20:40.474682958Z" level=info msg="StopPodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" returns successfully" Apr 21 10:20:40.475450 containerd[1573]: time="2026-04-21T10:20:40.475403930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l8wx5,Uid:520d03aa-bd03-4c5c-9f6e-49911f08321d,Namespace:calico-system,Attempt:1,}" Apr 21 10:20:40.476690 systemd[1]: run-netns-cni\x2d642d603f\x2dab19\x2dc6d8\x2d28b9\x2d22e303d60992.mount: Deactivated successfully. 
Apr 21 10:20:40.581105 systemd-networkd[1260]: cali95c343dc367: Gained IPv6LL Apr 21 10:20:40.615396 systemd-networkd[1260]: cali0357d9eff29: Link UP Apr 21 10:20:40.617086 systemd-networkd[1260]: cali0357d9eff29: Gained carrier Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.530 [INFO][5032] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--l8wx5-eth0 goldmane-5b85766d88- calico-system 520d03aa-bd03-4c5c-9f6e-49911f08321d 1122 0 2026-04-21 10:20:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-l8wx5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0357d9eff29 [] [] }} ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-" Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.531 [INFO][5032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.560 [INFO][5045] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" HandleID="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.570 [INFO][5045] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" HandleID="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058bb10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-l8wx5", "timestamp":"2026-04-21 10:20:40.560679891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ac580)} Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.570 [INFO][5045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.570 [INFO][5045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.570 [INFO][5045] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.574 [INFO][5045] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.579 [INFO][5045] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.583 [INFO][5045] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.585 [INFO][5045] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.587 [INFO][5045] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.588 [INFO][5045] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.592 [INFO][5045] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.601 [INFO][5045] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.611 [INFO][5045] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.611 [INFO][5045] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" host="localhost"
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.611 [INFO][5045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:40.629645 containerd[1573]: 2026-04-21 10:20:40.611 [INFO][5045] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" HandleID="k8s-pod-network.7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0"
Apr 21 10:20:40.630410 containerd[1573]: 2026-04-21 10:20:40.613 [INFO][5032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--l8wx5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"520d03aa-bd03-4c5c-9f6e-49911f08321d", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-l8wx5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0357d9eff29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:20:40.630410 containerd[1573]: 2026-04-21 10:20:40.613 [INFO][5032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0"
Apr 21 10:20:40.630410 containerd[1573]: 2026-04-21 10:20:40.613 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0357d9eff29 ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0"
Apr 21 10:20:40.630410 containerd[1573]: 2026-04-21 10:20:40.617 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0"
Apr 21 10:20:40.630410 containerd[1573]: 2026-04-21 10:20:40.617 [INFO][5032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--l8wx5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"520d03aa-bd03-4c5c-9f6e-49911f08321d", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81", Pod:"goldmane-5b85766d88-l8wx5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0357d9eff29", MAC:"7a:6b:68:c5:51:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:20:40.630410 containerd[1573]: 2026-04-21 10:20:40.627 [INFO][5032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81" Namespace="calico-system" Pod="goldmane-5b85766d88-l8wx5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0"
Apr 21 10:20:40.657980 containerd[1573]: time="2026-04-21T10:20:40.657419664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:20:40.657980 containerd[1573]: time="2026-04-21T10:20:40.657520304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:20:40.657980 containerd[1573]: time="2026-04-21T10:20:40.657532048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:20:40.658232 kubelet[2671]: E0421 10:20:40.657698 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:20:40.658594 containerd[1573]: time="2026-04-21T10:20:40.657802964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:20:40.725281 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:20:40.754490 containerd[1573]: time="2026-04-21T10:20:40.754437250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-l8wx5,Uid:520d03aa-bd03-4c5c-9f6e-49911f08321d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81\""
Apr 21 10:20:40.755945 containerd[1573]: time="2026-04-21T10:20:40.755834800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 21 10:20:41.382692 containerd[1573]: time="2026-04-21T10:20:41.382595493Z" level=info msg="StopPodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\""
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.458 [INFO][5131] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.458 [INFO][5131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" iface="eth0" netns="/var/run/netns/cni-3436529e-5b30-1d0e-f9b3-60fc17a7edda"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.459 [INFO][5131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" iface="eth0" netns="/var/run/netns/cni-3436529e-5b30-1d0e-f9b3-60fc17a7edda"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.459 [INFO][5131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" iface="eth0" netns="/var/run/netns/cni-3436529e-5b30-1d0e-f9b3-60fc17a7edda"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.459 [INFO][5131] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.459 [INFO][5131] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.479 [INFO][5139] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.480 [INFO][5139] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.480 [INFO][5139] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.487 [WARNING][5139] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.487 [INFO][5139] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.488 [INFO][5139] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:41.491581 containerd[1573]: 2026-04-21 10:20:41.489 [INFO][5131] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c"
Apr 21 10:20:41.491988 containerd[1573]: time="2026-04-21T10:20:41.491857399Z" level=info msg="TearDown network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" successfully"
Apr 21 10:20:41.491988 containerd[1573]: time="2026-04-21T10:20:41.491878308Z" level=info msg="StopPodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" returns successfully"
Apr 21 10:20:41.492596 containerd[1573]: time="2026-04-21T10:20:41.492566618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-9ncj6,Uid:301078df-91a6-4fff-b66d-08ba5d84899e,Namespace:calico-system,Attempt:1,}"
Apr 21 10:20:41.493978 systemd[1]: run-netns-cni\x2d3436529e\x2d5b30\x2d1d0e\x2df9b3\x2d60fc17a7edda.mount: Deactivated successfully.
Apr 21 10:20:41.595303 systemd-networkd[1260]: cali983568ad23f: Link UP
Apr 21 10:20:41.596390 systemd-networkd[1260]: cali983568ad23f: Gained carrier
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.533 [INFO][5147] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0 calico-apiserver-57b589b74f- calico-system 301078df-91a6-4fff-b66d-08ba5d84899e 1131 0 2026-04-21 10:20:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b589b74f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57b589b74f-9ncj6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali983568ad23f [] [] }} ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.533 [INFO][5147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.555 [INFO][5161] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" HandleID="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.561 [INFO][5161] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" HandleID="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-57b589b74f-9ncj6", "timestamp":"2026-04-21 10:20:41.555612886 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00035e6e0)}
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.561 [INFO][5161] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.561 [INFO][5161] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.561 [INFO][5161] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.563 [INFO][5161] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.567 [INFO][5161] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.571 [INFO][5161] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.573 [INFO][5161] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.575 [INFO][5161] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.575 [INFO][5161] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.576 [INFO][5161] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.583 [INFO][5161] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.590 [INFO][5161] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.591 [INFO][5161] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" host="localhost"
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.591 [INFO][5161] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:41.614316 containerd[1573]: 2026-04-21 10:20:41.591 [INFO][5161] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" HandleID="k8s-pod-network.f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.614803 containerd[1573]: 2026-04-21 10:20:41.592 [INFO][5147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"301078df-91a6-4fff-b66d-08ba5d84899e", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57b589b74f-9ncj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali983568ad23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:20:41.614803 containerd[1573]: 2026-04-21 10:20:41.592 [INFO][5147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.614803 containerd[1573]: 2026-04-21 10:20:41.592 [INFO][5147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali983568ad23f ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.614803 containerd[1573]: 2026-04-21 10:20:41.596 [INFO][5147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.614803 containerd[1573]: 2026-04-21 10:20:41.599 [INFO][5147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"301078df-91a6-4fff-b66d-08ba5d84899e", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a", Pod:"calico-apiserver-57b589b74f-9ncj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali983568ad23f", MAC:"52:fe:4b:32:36:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:20:41.614803 containerd[1573]: 2026-04-21 10:20:41.611 [INFO][5147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-9ncj6" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0"
Apr 21 10:20:41.631288 containerd[1573]: time="2026-04-21T10:20:41.631201777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:20:41.631390 containerd[1573]: time="2026-04-21T10:20:41.631315483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:20:41.631390 containerd[1573]: time="2026-04-21T10:20:41.631360804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:20:41.631606 containerd[1573]: time="2026-04-21T10:20:41.631544870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:20:41.650120 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 21 10:20:41.660782 kubelet[2671]: E0421 10:20:41.660716 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:20:41.674890 containerd[1573]: time="2026-04-21T10:20:41.674863590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-9ncj6,Uid:301078df-91a6-4fff-b66d-08ba5d84899e,Namespace:calico-system,Attempt:1,} returns sandbox id \"f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a\""
Apr 21 10:20:42.177270 systemd-networkd[1260]: cali0357d9eff29: Gained IPv6LL
Apr 21 10:20:42.387245 containerd[1573]: time="2026-04-21T10:20:42.386849928Z" level=info msg="StopPodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\""
Apr 21 10:20:42.391866 containerd[1573]: time="2026-04-21T10:20:42.388095214Z" level=info msg="StopPodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\""
Apr 21 10:20:42.391866 containerd[1573]: time="2026-04-21T10:20:42.388123987Z" level=info msg="StopPodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\""
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.529 [INFO][5262] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.530 [INFO][5262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" iface="eth0" netns="/var/run/netns/cni-67ed456c-c9c1-60b6-d8b6-7b69196cc392"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.533 [INFO][5262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" iface="eth0" netns="/var/run/netns/cni-67ed456c-c9c1-60b6-d8b6-7b69196cc392"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.533 [INFO][5262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" iface="eth0" netns="/var/run/netns/cni-67ed456c-c9c1-60b6-d8b6-7b69196cc392"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.533 [INFO][5262] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.533 [INFO][5262] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.581 [INFO][5286] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.582 [INFO][5286] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.582 [INFO][5286] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.593 [WARNING][5286] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.593 [INFO][5286] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0"
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.597 [INFO][5286] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:42.611854 containerd[1573]: 2026-04-21 10:20:42.606 [INFO][5262] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd"
Apr 21 10:20:42.617627 systemd[1]: run-netns-cni\x2d67ed456c\x2dc9c1\x2d60b6\x2dd8b6\x2d7b69196cc392.mount: Deactivated successfully.
Apr 21 10:20:42.620688 containerd[1573]: time="2026-04-21T10:20:42.620652331Z" level=info msg="TearDown network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" successfully"
Apr 21 10:20:42.620989 containerd[1573]: time="2026-04-21T10:20:42.620838847Z" level=info msg="StopPodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" returns successfully"
Apr 21 10:20:42.622072 containerd[1573]: time="2026-04-21T10:20:42.622034356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884ccd5b8-w5mwb,Uid:bf5f2a04-0539-42fa-a71e-30dc3c2207a4,Namespace:calico-system,Attempt:1,}"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.544 [INFO][5263] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.545 [INFO][5263] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" iface="eth0" netns="/var/run/netns/cni-eb2ca6d6-6747-c0cd-61ca-ec0a3737601c"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.546 [INFO][5263] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" iface="eth0" netns="/var/run/netns/cni-eb2ca6d6-6747-c0cd-61ca-ec0a3737601c"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.548 [INFO][5263] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" iface="eth0" netns="/var/run/netns/cni-eb2ca6d6-6747-c0cd-61ca-ec0a3737601c"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.548 [INFO][5263] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.548 [INFO][5263] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.617 [INFO][5293] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.618 [INFO][5293] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.618 [INFO][5293] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.636 [WARNING][5293] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.636 [INFO][5293] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0"
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.639 [INFO][5293] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:42.660502 containerd[1573]: 2026-04-21 10:20:42.645 [INFO][5263] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06"
Apr 21 10:20:42.661254 containerd[1573]: time="2026-04-21T10:20:42.661215034Z" level=info msg="TearDown network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" successfully"
Apr 21 10:20:42.661254 containerd[1573]: time="2026-04-21T10:20:42.661250266Z" level=info msg="StopPodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" returns successfully"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.593 [INFO][5261] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.596 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" iface="eth0" netns="/var/run/netns/cni-197cd777-a570-5fb4-545a-f8717e7da980"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.597 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" iface="eth0" netns="/var/run/netns/cni-197cd777-a570-5fb4-545a-f8717e7da980"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.597 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" iface="eth0" netns="/var/run/netns/cni-197cd777-a570-5fb4-545a-f8717e7da980"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.597 [INFO][5261] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.597 [INFO][5261] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.630 [INFO][5303] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.630 [INFO][5303] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.638 [INFO][5303] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.649 [WARNING][5303] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.649 [INFO][5303] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0"
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.653 [INFO][5303] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:42.661750 containerd[1573]: 2026-04-21 10:20:42.657 [INFO][5261] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9"
Apr 21 10:20:42.663411 kubelet[2671]: E0421 10:20:42.663358 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:20:42.668601 systemd[1]: run-netns-cni\x2deb2ca6d6\x2d6747\x2dc0cd\x2d61ca\x2dec0a3737601c.mount: Deactivated successfully.
Apr 21 10:20:42.669751 containerd[1573]: time="2026-04-21T10:20:42.669248073Z" level=info msg="TearDown network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" successfully"
Apr 21 10:20:42.669751 containerd[1573]: time="2026-04-21T10:20:42.669272210Z" level=info msg="StopPodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" returns successfully"
Apr 21 10:20:42.672254 systemd[1]: run-netns-cni\x2d197cd777\x2da570\x2d5fb4\x2d545a\x2df8717e7da980.mount: Deactivated successfully.
Apr 21 10:20:42.676068 containerd[1573]: time="2026-04-21T10:20:42.675319649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-ktxzm,Uid:a3033d5e-8f86-4407-9e8b-329c5d9f5e56,Namespace:calico-system,Attempt:1,}" Apr 21 10:20:42.676068 containerd[1573]: time="2026-04-21T10:20:42.675557644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtmzf,Uid:bf7ddc57-ba5f-4002-92d4-b664fca67867,Namespace:kube-system,Attempt:1,}" Apr 21 10:20:42.844218 systemd-networkd[1260]: cali9a4eda6a65f: Link UP Apr 21 10:20:42.844885 systemd-networkd[1260]: cali9a4eda6a65f: Gained carrier Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.717 [INFO][5313] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0 calico-kube-controllers-6884ccd5b8- calico-system bf5f2a04-0539-42fa-a71e-30dc3c2207a4 1144 0 2026-04-21 10:20:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6884ccd5b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6884ccd5b8-w5mwb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9a4eda6a65f [] [] }} ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.718 [INFO][5313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.753 [INFO][5350] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" HandleID="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.761 [INFO][5350] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" HandleID="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6884ccd5b8-w5mwb", "timestamp":"2026-04-21 10:20:42.753475752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ecf20)} Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.761 [INFO][5350] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.761 [INFO][5350] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.761 [INFO][5350] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.764 [INFO][5350] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.773 [INFO][5350] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.781 [INFO][5350] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.788 [INFO][5350] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.791 [INFO][5350] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.791 [INFO][5350] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.794 [INFO][5350] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924 Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.807 [INFO][5350] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.828 [INFO][5350] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.828 [INFO][5350] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" host="localhost" Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.828 [INFO][5350] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:42.863973 containerd[1573]: 2026-04-21 10:20:42.829 [INFO][5350] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" HandleID="k8s-pod-network.7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.865805 containerd[1573]: 2026-04-21 10:20:42.833 [INFO][5313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0", GenerateName:"calico-kube-controllers-6884ccd5b8-", Namespace:"calico-system", SelfLink:"", UID:"bf5f2a04-0539-42fa-a71e-30dc3c2207a4", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6884ccd5b8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6884ccd5b8-w5mwb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a4eda6a65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:42.865805 containerd[1573]: 2026-04-21 10:20:42.833 [INFO][5313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.865805 containerd[1573]: 2026-04-21 10:20:42.833 [INFO][5313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a4eda6a65f ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.865805 containerd[1573]: 2026-04-21 10:20:42.845 [INFO][5313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.865805 containerd[1573]: 
2026-04-21 10:20:42.845 [INFO][5313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0", GenerateName:"calico-kube-controllers-6884ccd5b8-", Namespace:"calico-system", SelfLink:"", UID:"bf5f2a04-0539-42fa-a71e-30dc3c2207a4", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6884ccd5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924", Pod:"calico-kube-controllers-6884ccd5b8-w5mwb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a4eda6a65f", MAC:"b2:da:85:30:bd:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:42.865805 containerd[1573]: 
2026-04-21 10:20:42.860 [INFO][5313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924" Namespace="calico-system" Pod="calico-kube-controllers-6884ccd5b8-w5mwb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:42.920701 containerd[1573]: time="2026-04-21T10:20:42.920512999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:42.920701 containerd[1573]: time="2026-04-21T10:20:42.920608189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:42.920701 containerd[1573]: time="2026-04-21T10:20:42.920640177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:42.921716 containerd[1573]: time="2026-04-21T10:20:42.921641001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:42.950723 systemd-networkd[1260]: calieee2e66b294: Link UP Apr 21 10:20:42.951603 systemd-networkd[1260]: calieee2e66b294: Gained carrier Apr 21 10:20:42.953448 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.765 [INFO][5338] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0 coredns-674b8bbfcf- kube-system bf7ddc57-ba5f-4002-92d4-b664fca67867 1145 0 2026-04-21 10:19:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-vtmzf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieee2e66b294 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.765 [INFO][5338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.809 [INFO][5365] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" HandleID="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:42.969984 containerd[1573]: 
2026-04-21 10:20:42.836 [INFO][5365] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" HandleID="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005122b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-vtmzf", "timestamp":"2026-04-21 10:20:42.809987902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a3600)} Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.836 [INFO][5365] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.836 [INFO][5365] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.836 [INFO][5365] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.870 [INFO][5365] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.899 [INFO][5365] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.914 [INFO][5365] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.920 [INFO][5365] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.928 [INFO][5365] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.928 [INFO][5365] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.930 [INFO][5365] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.934 [INFO][5365] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.946 [INFO][5365] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.946 [INFO][5365] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" host="localhost" Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.946 [INFO][5365] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:42.969984 containerd[1573]: 2026-04-21 10:20:42.946 [INFO][5365] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" HandleID="k8s-pod-network.0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:42.970697 containerd[1573]: 2026-04-21 10:20:42.948 [INFO][5338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf7ddc57-ba5f-4002-92d4-b664fca67867", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-vtmzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieee2e66b294", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:42.970697 containerd[1573]: 2026-04-21 10:20:42.948 [INFO][5338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:42.970697 containerd[1573]: 2026-04-21 10:20:42.948 [INFO][5338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieee2e66b294 ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:42.970697 containerd[1573]: 2026-04-21 10:20:42.952 [INFO][5338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:42.970697 containerd[1573]: 2026-04-21 10:20:42.952 [INFO][5338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf7ddc57-ba5f-4002-92d4-b664fca67867", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf", Pod:"coredns-674b8bbfcf-vtmzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieee2e66b294", MAC:"4e:a8:fd:b6:3f:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:42.970697 containerd[1573]: 2026-04-21 10:20:42.962 [INFO][5338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtmzf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:43.012848 containerd[1573]: time="2026-04-21T10:20:43.012704533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:43.012848 containerd[1573]: time="2026-04-21T10:20:43.012803634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:43.012848 containerd[1573]: time="2026-04-21T10:20:43.012842422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:43.013661 containerd[1573]: time="2026-04-21T10:20:43.013055183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:43.017130 containerd[1573]: time="2026-04-21T10:20:43.017034685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6884ccd5b8-w5mwb,Uid:bf5f2a04-0539-42fa-a71e-30dc3c2207a4,Namespace:calico-system,Attempt:1,} returns sandbox id \"7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924\"" Apr 21 10:20:43.060321 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:20:43.065020 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:53574.service - OpenSSH per-connection server daemon (10.0.0.1:53574). Apr 21 10:20:43.076137 systemd-networkd[1260]: cali983568ad23f: Gained IPv6LL Apr 21 10:20:43.091063 systemd-networkd[1260]: caliabb39e9e889: Link UP Apr 21 10:20:43.091197 systemd-networkd[1260]: caliabb39e9e889: Gained carrier Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.765 [INFO][5326] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0 calico-apiserver-57b589b74f- calico-system a3033d5e-8f86-4407-9e8b-329c5d9f5e56 1146 0 2026-04-21 10:20:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b589b74f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57b589b74f-ktxzm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliabb39e9e889 [] [] }} ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.765 [INFO][5326] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.843 [INFO][5372] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" HandleID="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.862 [INFO][5372] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" HandleID="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdb70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-57b589b74f-ktxzm", "timestamp":"2026-04-21 10:20:42.843178915 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038e000)} Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.862 [INFO][5372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.946 [INFO][5372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.946 [INFO][5372] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:42.965 [INFO][5372] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.010 [INFO][5372] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.024 [INFO][5372] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.031 [INFO][5372] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.042 [INFO][5372] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.042 [INFO][5372] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.050 [INFO][5372] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727 Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.058 [INFO][5372] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.078 [INFO][5372] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.079 [INFO][5372] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" host="localhost" Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.079 [INFO][5372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:43.107619 containerd[1573]: 2026-04-21 10:20:43.079 [INFO][5372] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" HandleID="k8s-pod-network.def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.108659 containerd[1573]: 2026-04-21 10:20:43.083 [INFO][5326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"a3033d5e-8f86-4407-9e8b-329c5d9f5e56", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57b589b74f-ktxzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliabb39e9e889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:43.108659 containerd[1573]: 2026-04-21 10:20:43.083 [INFO][5326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.108659 containerd[1573]: 2026-04-21 10:20:43.083 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabb39e9e889 ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.108659 containerd[1573]: 2026-04-21 10:20:43.085 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.108659 containerd[1573]: 2026-04-21 10:20:43.085 [INFO][5326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"a3033d5e-8f86-4407-9e8b-329c5d9f5e56", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727", Pod:"calico-apiserver-57b589b74f-ktxzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliabb39e9e889", MAC:"56:64:59:a4:49:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:43.108659 containerd[1573]: 2026-04-21 10:20:43.102 [INFO][5326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727" Namespace="calico-system" Pod="calico-apiserver-57b589b74f-ktxzm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:43.118446 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 53574 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:43.122437 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:43.124048 containerd[1573]: time="2026-04-21T10:20:43.123986250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtmzf,Uid:bf7ddc57-ba5f-4002-92d4-b664fca67867,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf\"" Apr 21 10:20:43.125015 kubelet[2671]: E0421 10:20:43.124828 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:43.134257 containerd[1573]: time="2026-04-21T10:20:43.134204642Z" level=info msg="CreateContainer within sandbox \"0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:20:43.134370 systemd-logind[1561]: New session 11 of user core. Apr 21 10:20:43.143464 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 21 10:20:43.158666 containerd[1573]: time="2026-04-21T10:20:43.158134959Z" level=info msg="CreateContainer within sandbox \"0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc65de4f3eba229e54492bc879fb8677c02df07b5a16e45150a04bf8f38ce0ef\"" Apr 21 10:20:43.158964 containerd[1573]: time="2026-04-21T10:20:43.158938684Z" level=info msg="StartContainer for \"fc65de4f3eba229e54492bc879fb8677c02df07b5a16e45150a04bf8f38ce0ef\"" Apr 21 10:20:43.174558 containerd[1573]: time="2026-04-21T10:20:43.174466525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:20:43.174558 containerd[1573]: time="2026-04-21T10:20:43.174516770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:20:43.174558 containerd[1573]: time="2026-04-21T10:20:43.174526396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:43.175655 containerd[1573]: time="2026-04-21T10:20:43.175045062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:20:43.202991 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:20:43.219182 containerd[1573]: time="2026-04-21T10:20:43.219087271Z" level=info msg="StartContainer for \"fc65de4f3eba229e54492bc879fb8677c02df07b5a16e45150a04bf8f38ce0ef\" returns successfully" Apr 21 10:20:43.242103 containerd[1573]: time="2026-04-21T10:20:43.242042595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b589b74f-ktxzm,Uid:a3033d5e-8f86-4407-9e8b-329c5d9f5e56,Namespace:calico-system,Attempt:1,} returns sandbox id \"def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727\"" Apr 21 10:20:43.344285 sshd[5499]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:43.348732 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:53574.service: Deactivated successfully. Apr 21 10:20:43.350724 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:20:43.350787 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:20:43.352360 systemd-logind[1561]: Removed session 11. 
Apr 21 10:20:43.715035 kubelet[2671]: E0421 10:20:43.714991 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:43.754036 kubelet[2671]: I0421 10:20:43.752311 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vtmzf" podStartSLOduration=45.752292728 podStartE2EDuration="45.752292728s" podCreationTimestamp="2026-04-21 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:20:43.732127546 +0000 UTC m=+51.452582477" watchObservedRunningTime="2026-04-21 10:20:43.752292728 +0000 UTC m=+51.472747590" Apr 21 10:20:43.866658 containerd[1573]: time="2026-04-21T10:20:43.866608254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:43.867330 containerd[1573]: time="2026-04-21T10:20:43.867258626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:20:43.868372 containerd[1573]: time="2026-04-21T10:20:43.868325947Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:43.870938 containerd[1573]: time="2026-04-21T10:20:43.870860591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:43.871464 containerd[1573]: time="2026-04-21T10:20:43.871421770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", 
repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.115522799s" Apr 21 10:20:43.871464 containerd[1573]: time="2026-04-21T10:20:43.871452950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:20:43.872421 containerd[1573]: time="2026-04-21T10:20:43.872371369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:20:43.876473 containerd[1573]: time="2026-04-21T10:20:43.876412900Z" level=info msg="CreateContainer within sandbox \"7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:20:43.962704 containerd[1573]: time="2026-04-21T10:20:43.962606557Z" level=info msg="CreateContainer within sandbox \"7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f8e5b864631c1b0b8749ac4178078ad583267f22348ceeb45db7cb55091ea35d\"" Apr 21 10:20:43.964470 containerd[1573]: time="2026-04-21T10:20:43.963413191Z" level=info msg="StartContainer for \"f8e5b864631c1b0b8749ac4178078ad583267f22348ceeb45db7cb55091ea35d\"" Apr 21 10:20:44.029834 containerd[1573]: time="2026-04-21T10:20:44.029702263Z" level=info msg="StartContainer for \"f8e5b864631c1b0b8749ac4178078ad583267f22348ceeb45db7cb55091ea35d\" returns successfully" Apr 21 10:20:44.609350 systemd-networkd[1260]: caliabb39e9e889: Gained IPv6LL Apr 21 10:20:44.673153 systemd-networkd[1260]: calieee2e66b294: Gained IPv6LL Apr 21 10:20:44.719174 kubelet[2671]: E0421 10:20:44.719085 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
10:20:44.746357 kubelet[2671]: I0421 10:20:44.746043 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-l8wx5" podStartSLOduration=35.629325683 podStartE2EDuration="38.746022527s" podCreationTimestamp="2026-04-21 10:20:06 +0000 UTC" firstStartedPulling="2026-04-21 10:20:40.755553578 +0000 UTC m=+48.476008429" lastFinishedPulling="2026-04-21 10:20:43.872250422 +0000 UTC m=+51.592705273" observedRunningTime="2026-04-21 10:20:44.745279173 +0000 UTC m=+52.465734030" watchObservedRunningTime="2026-04-21 10:20:44.746022527 +0000 UTC m=+52.466477387" Apr 21 10:20:44.801297 systemd-networkd[1260]: cali9a4eda6a65f: Gained IPv6LL Apr 21 10:20:45.724301 kubelet[2671]: E0421 10:20:45.724192 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:20:48.002849 containerd[1573]: time="2026-04-21T10:20:48.002661361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:48.003543 containerd[1573]: time="2026-04-21T10:20:48.003426905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:20:48.004458 containerd[1573]: time="2026-04-21T10:20:48.004411636Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:48.007750 containerd[1573]: time="2026-04-21T10:20:48.007630708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:48.008737 containerd[1573]: time="2026-04-21T10:20:48.008687416Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.136277742s" Apr 21 10:20:48.008835 containerd[1573]: time="2026-04-21T10:20:48.008741142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:20:48.010397 containerd[1573]: time="2026-04-21T10:20:48.010352279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:20:48.013968 containerd[1573]: time="2026-04-21T10:20:48.013923150Z" level=info msg="CreateContainer within sandbox \"f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:20:48.026843 containerd[1573]: time="2026-04-21T10:20:48.026756650Z" level=info msg="CreateContainer within sandbox \"f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"31f744d98c8f0707b8a87a36b3e51fa2e4b7fdfcda1073794b7c80223eb57993\"" Apr 21 10:20:48.027456 containerd[1573]: time="2026-04-21T10:20:48.027428023Z" level=info msg="StartContainer for \"31f744d98c8f0707b8a87a36b3e51fa2e4b7fdfcda1073794b7c80223eb57993\"" Apr 21 10:20:48.122042 containerd[1573]: time="2026-04-21T10:20:48.121990270Z" level=info msg="StartContainer for \"31f744d98c8f0707b8a87a36b3e51fa2e4b7fdfcda1073794b7c80223eb57993\" returns successfully" Apr 21 10:20:48.351362 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:53578.service - OpenSSH per-connection server daemon (10.0.0.1:53578). 
Apr 21 10:20:48.403603 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 53578 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:48.405059 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:48.409950 systemd-logind[1561]: New session 12 of user core. Apr 21 10:20:48.415243 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:20:48.653694 sshd[5799]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:48.657018 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:53578.service: Deactivated successfully. Apr 21 10:20:48.659473 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:20:48.659727 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:20:48.661173 systemd-logind[1561]: Removed session 12. Apr 21 10:20:49.821608 kubelet[2671]: I0421 10:20:49.821427 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57b589b74f-9ncj6" podStartSLOduration=37.487257844 podStartE2EDuration="43.821393184s" podCreationTimestamp="2026-04-21 10:20:06 +0000 UTC" firstStartedPulling="2026-04-21 10:20:41.67611781 +0000 UTC m=+49.396572661" lastFinishedPulling="2026-04-21 10:20:48.010253147 +0000 UTC m=+55.730708001" observedRunningTime="2026-04-21 10:20:48.748237625 +0000 UTC m=+56.468692514" watchObservedRunningTime="2026-04-21 10:20:49.821393184 +0000 UTC m=+57.541848035" Apr 21 10:20:51.494365 containerd[1573]: time="2026-04-21T10:20:51.494289816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.494992 containerd[1573]: time="2026-04-21T10:20:51.494949353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:20:51.496091 containerd[1573]: time="2026-04-21T10:20:51.496059225Z" 
level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.498232 containerd[1573]: time="2026-04-21T10:20:51.498195688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.498716 containerd[1573]: time="2026-04-21T10:20:51.498683221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.488284888s" Apr 21 10:20:51.498776 containerd[1573]: time="2026-04-21T10:20:51.498721542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:20:51.500252 containerd[1573]: time="2026-04-21T10:20:51.500186104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:20:51.528332 containerd[1573]: time="2026-04-21T10:20:51.528266102Z" level=info msg="CreateContainer within sandbox \"7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:20:51.558073 containerd[1573]: time="2026-04-21T10:20:51.558030098Z" level=info msg="CreateContainer within sandbox \"7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"de527dc668c27084989d1f785af89480625875ff017e488fa4b4803b1c18e9b2\"" Apr 21 10:20:51.558661 
containerd[1573]: time="2026-04-21T10:20:51.558635213Z" level=info msg="StartContainer for \"de527dc668c27084989d1f785af89480625875ff017e488fa4b4803b1c18e9b2\"" Apr 21 10:20:51.624782 containerd[1573]: time="2026-04-21T10:20:51.624739843Z" level=info msg="StartContainer for \"de527dc668c27084989d1f785af89480625875ff017e488fa4b4803b1c18e9b2\" returns successfully" Apr 21 10:20:51.757320 kubelet[2671]: I0421 10:20:51.756927 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6884ccd5b8-w5mwb" podStartSLOduration=36.276924965 podStartE2EDuration="44.756877096s" podCreationTimestamp="2026-04-21 10:20:07 +0000 UTC" firstStartedPulling="2026-04-21 10:20:43.01973739 +0000 UTC m=+50.740192250" lastFinishedPulling="2026-04-21 10:20:51.499689528 +0000 UTC m=+59.220144381" observedRunningTime="2026-04-21 10:20:51.756231037 +0000 UTC m=+59.476685903" watchObservedRunningTime="2026-04-21 10:20:51.756877096 +0000 UTC m=+59.477331955" Apr 21 10:20:51.941024 containerd[1573]: time="2026-04-21T10:20:51.940951905Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.942341 containerd[1573]: time="2026-04-21T10:20:51.942261528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:20:51.964479 containerd[1573]: time="2026-04-21T10:20:51.956956289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 456.736181ms" Apr 21 10:20:51.964479 containerd[1573]: time="2026-04-21T10:20:51.964485046Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:20:51.970934 containerd[1573]: time="2026-04-21T10:20:51.970860212Z" level=info msg="CreateContainer within sandbox \"def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:20:51.993514 containerd[1573]: time="2026-04-21T10:20:51.993385345Z" level=info msg="CreateContainer within sandbox \"def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0a9be8add6d1f95ecfcd4e5221896635b099d2b45c28300bb259331840375691\"" Apr 21 10:20:51.994261 containerd[1573]: time="2026-04-21T10:20:51.994226637Z" level=info msg="StartContainer for \"0a9be8add6d1f95ecfcd4e5221896635b099d2b45c28300bb259331840375691\"" Apr 21 10:20:52.073208 containerd[1573]: time="2026-04-21T10:20:52.073112748Z" level=info msg="StartContainer for \"0a9be8add6d1f95ecfcd4e5221896635b099d2b45c28300bb259331840375691\" returns successfully" Apr 21 10:20:52.449414 containerd[1573]: time="2026-04-21T10:20:52.449036519Z" level=info msg="StopPodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\"" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.506 [WARNING][5947] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf7ddc57-ba5f-4002-92d4-b664fca67867", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf", Pod:"coredns-674b8bbfcf-vtmzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieee2e66b294", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.506 [INFO][5947] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.506 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" iface="eth0" netns="" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.506 [INFO][5947] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.506 [INFO][5947] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.564 [INFO][5955] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.566 [INFO][5955] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.566 [INFO][5955] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.578 [WARNING][5955] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.578 [INFO][5955] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.581 [INFO][5955] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:52.585411 containerd[1573]: 2026-04-21 10:20:52.582 [INFO][5947] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.586138 containerd[1573]: time="2026-04-21T10:20:52.585453973Z" level=info msg="TearDown network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" successfully" Apr 21 10:20:52.586138 containerd[1573]: time="2026-04-21T10:20:52.585477289Z" level=info msg="StopPodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" returns successfully" Apr 21 10:20:52.593749 containerd[1573]: time="2026-04-21T10:20:52.593536530Z" level=info msg="RemovePodSandbox for \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\"" Apr 21 10:20:52.595539 containerd[1573]: time="2026-04-21T10:20:52.595484926Z" level=info msg="Forcibly stopping sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\"" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.646 [WARNING][5973] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf7ddc57-ba5f-4002-92d4-b664fca67867", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a510998076cd57a9ccc88b6acb84654748ca3400d30605083ac5da1057029cf", Pod:"coredns-674b8bbfcf-vtmzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieee2e66b294", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.648 [INFO][5973] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.648 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" iface="eth0" netns="" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.648 [INFO][5973] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.648 [INFO][5973] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.684 [INFO][5982] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.684 [INFO][5982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.684 [INFO][5982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.691 [WARNING][5982] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.691 [INFO][5982] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" HandleID="k8s-pod-network.8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Workload="localhost-k8s-coredns--674b8bbfcf--vtmzf-eth0" Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.693 [INFO][5982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:52.698998 containerd[1573]: 2026-04-21 10:20:52.695 [INFO][5973] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06" Apr 21 10:20:52.698998 containerd[1573]: time="2026-04-21T10:20:52.698788492Z" level=info msg="TearDown network for sandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" successfully" Apr 21 10:20:52.724558 containerd[1573]: time="2026-04-21T10:20:52.724388802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:52.724558 containerd[1573]: time="2026-04-21T10:20:52.724518424Z" level=info msg="RemovePodSandbox \"8d76676099e244763d71e9be1fa80ffb50bb6d468491d6182c4136620707ef06\" returns successfully" Apr 21 10:20:52.736442 containerd[1573]: time="2026-04-21T10:20:52.736390103Z" level=info msg="StopPodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\"" Apr 21 10:20:52.764475 kubelet[2671]: I0421 10:20:52.764393 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-57b589b74f-ktxzm" podStartSLOduration=38.042456568 podStartE2EDuration="46.764376135s" podCreationTimestamp="2026-04-21 10:20:06 +0000 UTC" firstStartedPulling="2026-04-21 10:20:43.243554387 +0000 UTC m=+50.964009237" lastFinishedPulling="2026-04-21 10:20:51.965473954 +0000 UTC m=+59.685928804" observedRunningTime="2026-04-21 10:20:52.764194774 +0000 UTC m=+60.484649626" watchObservedRunningTime="2026-04-21 10:20:52.764376135 +0000 UTC m=+60.484830995" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.786 [WARNING][6001] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"301078df-91a6-4fff-b66d-08ba5d84899e", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a", Pod:"calico-apiserver-57b589b74f-9ncj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali983568ad23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.786 [INFO][6001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.786 [INFO][6001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" iface="eth0" netns="" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.786 [INFO][6001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.786 [INFO][6001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.813 [INFO][6011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.813 [INFO][6011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.813 [INFO][6011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.818 [WARNING][6011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.818 [INFO][6011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.821 [INFO][6011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:52.825744 containerd[1573]: 2026-04-21 10:20:52.823 [INFO][6001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.825744 containerd[1573]: time="2026-04-21T10:20:52.825616796Z" level=info msg="TearDown network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" successfully" Apr 21 10:20:52.825744 containerd[1573]: time="2026-04-21T10:20:52.825638240Z" level=info msg="StopPodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" returns successfully" Apr 21 10:20:52.826381 containerd[1573]: time="2026-04-21T10:20:52.826349595Z" level=info msg="RemovePodSandbox for \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\"" Apr 21 10:20:52.826569 containerd[1573]: time="2026-04-21T10:20:52.826557068Z" level=info msg="Forcibly stopping sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\"" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.867 [WARNING][6028] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"301078df-91a6-4fff-b66d-08ba5d84899e", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4b8f1b7245ec89e5730a3f786dd168f772f83afc8c3f5c30aea8193ea74108a", Pod:"calico-apiserver-57b589b74f-9ncj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali983568ad23f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.868 [INFO][6028] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.868 [INFO][6028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" iface="eth0" netns="" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.868 [INFO][6028] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.868 [INFO][6028] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.899 [INFO][6036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.899 [INFO][6036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.899 [INFO][6036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.906 [WARNING][6036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.906 [INFO][6036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" HandleID="k8s-pod-network.228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Workload="localhost-k8s-calico--apiserver--57b589b74f--9ncj6-eth0" Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.912 [INFO][6036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:52.916067 containerd[1573]: 2026-04-21 10:20:52.914 [INFO][6028] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c" Apr 21 10:20:52.916067 containerd[1573]: time="2026-04-21T10:20:52.916072187Z" level=info msg="TearDown network for sandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" successfully" Apr 21 10:20:52.920239 containerd[1573]: time="2026-04-21T10:20:52.920157981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:52.920239 containerd[1573]: time="2026-04-21T10:20:52.920226541Z" level=info msg="RemovePodSandbox \"228a5252a920517a220064ba37d19fb04c8193b8ff1d31d222411a28361eb05c\" returns successfully" Apr 21 10:20:52.920932 containerd[1573]: time="2026-04-21T10:20:52.920861097Z" level=info msg="StopPodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\"" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.953 [WARNING][6054] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"a3033d5e-8f86-4407-9e8b-329c5d9f5e56", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727", Pod:"calico-apiserver-57b589b74f-ktxzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliabb39e9e889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.953 [INFO][6054] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.953 [INFO][6054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" iface="eth0" netns="" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.953 [INFO][6054] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.953 [INFO][6054] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.986 [INFO][6062] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.986 [INFO][6062] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.986 [INFO][6062] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.994 [WARNING][6062] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:52.994 [INFO][6062] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:53.000 [INFO][6062] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.003834 containerd[1573]: 2026-04-21 10:20:53.002 [INFO][6054] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.003834 containerd[1573]: time="2026-04-21T10:20:53.003720075Z" level=info msg="TearDown network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" successfully" Apr 21 10:20:53.003834 containerd[1573]: time="2026-04-21T10:20:53.003743422Z" level=info msg="StopPodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" returns successfully" Apr 21 10:20:53.004471 containerd[1573]: time="2026-04-21T10:20:53.004386072Z" level=info msg="RemovePodSandbox for \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\"" Apr 21 10:20:53.004471 containerd[1573]: time="2026-04-21T10:20:53.004410790Z" level=info msg="Forcibly stopping sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\"" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.054 [WARNING][6079] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0", GenerateName:"calico-apiserver-57b589b74f-", Namespace:"calico-system", SelfLink:"", UID:"a3033d5e-8f86-4407-9e8b-329c5d9f5e56", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b589b74f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"def69826d46faaee766c9b6c191e26013ab9f2ee57e667b579e1a25fac0e4727", Pod:"calico-apiserver-57b589b74f-ktxzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliabb39e9e889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.054 [INFO][6079] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.054 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" iface="eth0" netns="" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.054 [INFO][6079] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.054 [INFO][6079] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.108 [INFO][6088] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.108 [INFO][6088] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.108 [INFO][6088] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.117 [WARNING][6088] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.117 [INFO][6088] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" HandleID="k8s-pod-network.8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Workload="localhost-k8s-calico--apiserver--57b589b74f--ktxzm-eth0" Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.118 [INFO][6088] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.122188 containerd[1573]: 2026-04-21 10:20:53.120 [INFO][6079] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9" Apr 21 10:20:53.123975 containerd[1573]: time="2026-04-21T10:20:53.122231669Z" level=info msg="TearDown network for sandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" successfully" Apr 21 10:20:53.128041 containerd[1573]: time="2026-04-21T10:20:53.127963933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:53.128105 containerd[1573]: time="2026-04-21T10:20:53.128054139Z" level=info msg="RemovePodSandbox \"8ac2b69a8789e41d895ce34dc0e4310e0b4654d2013634533a82c662590bade9\" returns successfully" Apr 21 10:20:53.128647 containerd[1573]: time="2026-04-21T10:20:53.128625450Z" level=info msg="StopPodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\"" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.166 [WARNING][6106] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0", GenerateName:"calico-kube-controllers-6884ccd5b8-", Namespace:"calico-system", SelfLink:"", UID:"bf5f2a04-0539-42fa-a71e-30dc3c2207a4", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6884ccd5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924", Pod:"calico-kube-controllers-6884ccd5b8-w5mwb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a4eda6a65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.167 [INFO][6106] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.167 [INFO][6106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" iface="eth0" netns="" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.167 [INFO][6106] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.167 [INFO][6106] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.196 [INFO][6114] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.197 [INFO][6114] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.197 [INFO][6114] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.208 [WARNING][6114] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.208 [INFO][6114] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.210 [INFO][6114] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.217534 containerd[1573]: 2026-04-21 10:20:53.213 [INFO][6106] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.220685 containerd[1573]: time="2026-04-21T10:20:53.217536997Z" level=info msg="TearDown network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" successfully" Apr 21 10:20:53.220685 containerd[1573]: time="2026-04-21T10:20:53.217558076Z" level=info msg="StopPodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" returns successfully" Apr 21 10:20:53.220685 containerd[1573]: time="2026-04-21T10:20:53.218298930Z" level=info msg="RemovePodSandbox for \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\"" Apr 21 10:20:53.220685 containerd[1573]: time="2026-04-21T10:20:53.218360185Z" level=info msg="Forcibly stopping sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\"" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.263 [WARNING][6139] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0", GenerateName:"calico-kube-controllers-6884ccd5b8-", Namespace:"calico-system", SelfLink:"", UID:"bf5f2a04-0539-42fa-a71e-30dc3c2207a4", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6884ccd5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7006ffcded612735270d01071886d7869e0311b94e7c69f980956cb9f1bd5924", Pod:"calico-kube-controllers-6884ccd5b8-w5mwb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9a4eda6a65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.263 [INFO][6139] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.263 [INFO][6139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" iface="eth0" netns="" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.263 [INFO][6139] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.263 [INFO][6139] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.286 [INFO][6147] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.286 [INFO][6147] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.286 [INFO][6147] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.294 [WARNING][6147] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.294 [INFO][6147] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" HandleID="k8s-pod-network.403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Workload="localhost-k8s-calico--kube--controllers--6884ccd5b8--w5mwb-eth0" Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.296 [INFO][6147] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.299333 containerd[1573]: 2026-04-21 10:20:53.297 [INFO][6139] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd" Apr 21 10:20:53.299671 containerd[1573]: time="2026-04-21T10:20:53.299363583Z" level=info msg="TearDown network for sandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" successfully" Apr 21 10:20:53.303641 containerd[1573]: time="2026-04-21T10:20:53.303566154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:53.303697 containerd[1573]: time="2026-04-21T10:20:53.303650572Z" level=info msg="RemovePodSandbox \"403d6cec40a9bd2f517f4f66e5cd97a9873417a4482e7dc1e6cceffe6ed2d2fd\" returns successfully" Apr 21 10:20:53.304220 containerd[1573]: time="2026-04-21T10:20:53.304183768Z" level=info msg="StopPodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\"" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.341 [WARNING][6165] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8", Pod:"coredns-674b8bbfcf-qrn9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c343dc367", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.342 [INFO][6165] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.342 [INFO][6165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" iface="eth0" netns="" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.342 [INFO][6165] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.342 [INFO][6165] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.368 [INFO][6174] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.368 [INFO][6174] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.369 [INFO][6174] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.380 [WARNING][6174] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.380 [INFO][6174] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.382 [INFO][6174] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.385804 containerd[1573]: 2026-04-21 10:20:53.384 [INFO][6165] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.386236 containerd[1573]: time="2026-04-21T10:20:53.385816381Z" level=info msg="TearDown network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" successfully" Apr 21 10:20:53.386236 containerd[1573]: time="2026-04-21T10:20:53.385852679Z" level=info msg="StopPodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" returns successfully" Apr 21 10:20:53.387965 containerd[1573]: time="2026-04-21T10:20:53.386436238Z" level=info msg="RemovePodSandbox for \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\"" Apr 21 10:20:53.387965 containerd[1573]: time="2026-04-21T10:20:53.386473520Z" level=info msg="Forcibly stopping sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\"" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.430 [WARNING][6192] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"de1ad74a-7d22-4eba-8fc5-d7a4b23d1a87", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 19, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3ceb5e6b2753d47536cd2fbde17a608970c0ebdab2e41782b2dca71eef9b2f8", Pod:"coredns-674b8bbfcf-qrn9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95c343dc367", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.430 [INFO][6192] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.430 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" iface="eth0" netns="" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.430 [INFO][6192] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.430 [INFO][6192] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.458 [INFO][6201] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.459 [INFO][6201] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.460 [INFO][6201] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.486 [WARNING][6201] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.486 [INFO][6201] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" HandleID="k8s-pod-network.b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Workload="localhost-k8s-coredns--674b8bbfcf--qrn9t-eth0" Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.505 [INFO][6201] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.531558 containerd[1573]: 2026-04-21 10:20:53.522 [INFO][6192] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59" Apr 21 10:20:53.531558 containerd[1573]: time="2026-04-21T10:20:53.531546661Z" level=info msg="TearDown network for sandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" successfully" Apr 21 10:20:53.576550 containerd[1573]: time="2026-04-21T10:20:53.576414381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:53.576550 containerd[1573]: time="2026-04-21T10:20:53.576491068Z" level=info msg="RemovePodSandbox \"b38933138240c8d000c500abd9f2fb7c17251e0aeed1464267480f4ae4526c59\" returns successfully" Apr 21 10:20:53.577320 containerd[1573]: time="2026-04-21T10:20:53.577301167Z" level=info msg="StopPodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\"" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.609 [WARNING][6222] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--l8wx5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"520d03aa-bd03-4c5c-9f6e-49911f08321d", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81", Pod:"goldmane-5b85766d88-l8wx5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0357d9eff29", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.610 [INFO][6222] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.610 [INFO][6222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" iface="eth0" netns="" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.610 [INFO][6222] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.610 [INFO][6222] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.632 [INFO][6230] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.632 [INFO][6230] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.632 [INFO][6230] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.640 [WARNING][6230] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.640 [INFO][6230] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.642 [INFO][6230] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.645854 containerd[1573]: 2026-04-21 10:20:53.644 [INFO][6222] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.645854 containerd[1573]: time="2026-04-21T10:20:53.645710066Z" level=info msg="TearDown network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" successfully" Apr 21 10:20:53.645854 containerd[1573]: time="2026-04-21T10:20:53.645729210Z" level=info msg="StopPodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" returns successfully" Apr 21 10:20:53.646611 containerd[1573]: time="2026-04-21T10:20:53.646263118Z" level=info msg="RemovePodSandbox for \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\"" Apr 21 10:20:53.646611 containerd[1573]: time="2026-04-21T10:20:53.646289472Z" level=info msg="Forcibly stopping sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\"" Apr 21 10:20:53.663217 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:37802.service - OpenSSH per-connection server daemon (10.0.0.1:37802). 
Apr 21 10:20:53.731056 sshd[6252]: Accepted publickey for core from 10.0.0.1 port 37802 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:53.732159 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.688 [WARNING][6247] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--l8wx5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"520d03aa-bd03-4c5c-9f6e-49911f08321d", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7be0ff2f5ba4a1fb456d164e23106f2af1fde7bbc30cf4a281dbf194df54be81", Pod:"goldmane-5b85766d88-l8wx5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0357d9eff29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 
10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.688 [INFO][6247] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.688 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" iface="eth0" netns="" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.688 [INFO][6247] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.688 [INFO][6247] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.714 [INFO][6257] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.714 [INFO][6257] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.714 [INFO][6257] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.725 [WARNING][6257] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.725 [INFO][6257] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" HandleID="k8s-pod-network.f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Workload="localhost-k8s-goldmane--5b85766d88--l8wx5-eth0" Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.728 [INFO][6257] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.732820 containerd[1573]: 2026-04-21 10:20:53.731 [INFO][6247] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b" Apr 21 10:20:53.733170 containerd[1573]: time="2026-04-21T10:20:53.732876697Z" level=info msg="TearDown network for sandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" successfully" Apr 21 10:20:53.736051 containerd[1573]: time="2026-04-21T10:20:53.735996818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:20:53.736104 containerd[1573]: time="2026-04-21T10:20:53.736065555Z" level=info msg="RemovePodSandbox \"f11a01287df1a3c2b85db4651cb3add12618e32f8e9efb66276bd5097fe1e94b\" returns successfully" Apr 21 10:20:53.736884 containerd[1573]: time="2026-04-21T10:20:53.736603371Z" level=info msg="StopPodSandbox for \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\"" Apr 21 10:20:53.736661 systemd-logind[1561]: New session 13 of user core. 
Apr 21 10:20:53.741145 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.783 [WARNING][6276] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpxdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e073554a-e6ab-44ff-a032-f5d7862b4ec3", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4", Pod:"csi-node-driver-dpxdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6de2a3c5fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.783 [INFO][6276] cni-plugin/k8s.go 
652: Cleaning up netns ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.783 [INFO][6276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" iface="eth0" netns="" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.783 [INFO][6276] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.783 [INFO][6276] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.808 [INFO][6288] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.808 [INFO][6288] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.808 [INFO][6288] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.821 [WARNING][6288] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.821 [INFO][6288] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0" Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.824 [INFO][6288] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:20:53.828429 containerd[1573]: 2026-04-21 10:20:53.825 [INFO][6276] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Apr 21 10:20:53.828429 containerd[1573]: time="2026-04-21T10:20:53.828008709Z" level=info msg="TearDown network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\" successfully" Apr 21 10:20:53.828429 containerd[1573]: time="2026-04-21T10:20:53.828034566Z" level=info msg="StopPodSandbox for \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\" returns successfully" Apr 21 10:20:53.829770 containerd[1573]: time="2026-04-21T10:20:53.829116193Z" level=info msg="RemovePodSandbox for \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\"" Apr 21 10:20:53.829770 containerd[1573]: time="2026-04-21T10:20:53.829211590Z" level=info msg="Forcibly stopping sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\"" Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.879 [WARNING][6314] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dpxdk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e073554a-e6ab-44ff-a032-f5d7862b4ec3", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 20, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58792aaddcd9b2ebe8cf865f42d8fd94a64802c54a2bedc808a9ac5f95e0c5c4", Pod:"csi-node-driver-dpxdk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6de2a3c5fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.879 [INFO][6314] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6"
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.879 [INFO][6314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" iface="eth0" netns=""
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.879 [INFO][6314] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6"
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.879 [INFO][6314] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6"
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.934 [INFO][6323] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0"
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.935 [INFO][6323] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.935 [INFO][6323] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.948 [WARNING][6323] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0"
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.948 [INFO][6323] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" HandleID="k8s-pod-network.8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6" Workload="localhost-k8s-csi--node--driver--dpxdk-eth0"
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.949 [INFO][6323] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:53.954328 containerd[1573]: 2026-04-21 10:20:53.951 [INFO][6314] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6"
Apr 21 10:20:53.955295 containerd[1573]: time="2026-04-21T10:20:53.954337221Z" level=info msg="TearDown network for sandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\" successfully"
Apr 21 10:20:53.990855 containerd[1573]: time="2026-04-21T10:20:53.990742817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:20:53.990855 containerd[1573]: time="2026-04-21T10:20:53.990855687Z" level=info msg="RemovePodSandbox \"8990d12cb8cf9221e71c2fce5d0f3221d360ae4a009cb827724e0eb997c876d6\" returns successfully"
Apr 21 10:20:53.991752 containerd[1573]: time="2026-04-21T10:20:53.991633200Z" level=info msg="StopPodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\""
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.053 [WARNING][6343] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" WorkloadEndpoint="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.053 [INFO][6343] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.054 [INFO][6343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" iface="eth0" netns=""
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.054 [INFO][6343] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.054 [INFO][6343] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.137 [INFO][6352] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.139 [INFO][6352] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.139 [INFO][6352] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.148 [WARNING][6352] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.148 [INFO][6352] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.153 [INFO][6352] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:54.158028 containerd[1573]: 2026-04-21 10:20:54.156 [INFO][6343] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.158373 containerd[1573]: time="2026-04-21T10:20:54.157991567Z" level=info msg="TearDown network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" successfully"
Apr 21 10:20:54.158373 containerd[1573]: time="2026-04-21T10:20:54.158048634Z" level=info msg="StopPodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" returns successfully"
Apr 21 10:20:54.158809 containerd[1573]: time="2026-04-21T10:20:54.158757624Z" level=info msg="RemovePodSandbox for \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\""
Apr 21 10:20:54.158945 containerd[1573]: time="2026-04-21T10:20:54.158814381Z" level=info msg="Forcibly stopping sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\""
Apr 21 10:20:54.204243 sshd[6252]: pam_unix(sshd:session): session closed for user core
Apr 21 10:20:54.215555 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:37816.service - OpenSSH per-connection server daemon (10.0.0.1:37816).
Apr 21 10:20:54.216000 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:37802.service: Deactivated successfully.
Apr 21 10:20:54.223413 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:20:54.225197 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:20:54.227373 systemd-logind[1561]: Removed session 13.
Apr 21 10:20:54.242937 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 37816 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:20:54.244624 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:20:54.250641 systemd-logind[1561]: New session 14 of user core.
Apr 21 10:20:54.254117 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.249 [WARNING][6372] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" WorkloadEndpoint="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.251 [INFO][6372] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.251 [INFO][6372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" iface="eth0" netns=""
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.251 [INFO][6372] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.251 [INFO][6372] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.274 [INFO][6386] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.275 [INFO][6386] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.275 [INFO][6386] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.285 [WARNING][6386] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.285 [INFO][6386] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" HandleID="k8s-pod-network.9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4" Workload="localhost-k8s-whisker--66c5849fb6--ghkc9-eth0"
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.287 [INFO][6386] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:20:54.290419 containerd[1573]: 2026-04-21 10:20:54.288 [INFO][6372] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4"
Apr 21 10:20:54.290761 containerd[1573]: time="2026-04-21T10:20:54.290493605Z" level=info msg="TearDown network for sandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" successfully"
Apr 21 10:20:54.295781 containerd[1573]: time="2026-04-21T10:20:54.295741141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:20:54.295953 containerd[1573]: time="2026-04-21T10:20:54.295811521Z" level=info msg="RemovePodSandbox \"9b7d36164bbb2cfc99fb6a8897f1b9733211050735bb6a628454145d4111a5f4\" returns successfully"
Apr 21 10:20:54.422962 sshd[6379]: pam_unix(sshd:session): session closed for user core
Apr 21 10:20:54.425874 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:37816.service: Deactivated successfully.
Apr 21 10:20:54.428685 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:20:54.434723 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:37818.service - OpenSSH per-connection server daemon (10.0.0.1:37818).
Apr 21 10:20:54.435022 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:20:54.437606 systemd-logind[1561]: Removed session 14.
Apr 21 10:20:54.472147 sshd[6406]: Accepted publickey for core from 10.0.0.1 port 37818 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:20:54.473633 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:20:54.479213 systemd-logind[1561]: New session 15 of user core.
Apr 21 10:20:54.489385 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:20:54.617289 sshd[6406]: pam_unix(sshd:session): session closed for user core
Apr 21 10:20:54.620431 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:37818.service: Deactivated successfully.
Apr 21 10:20:54.622273 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:20:54.622297 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:20:54.623514 systemd-logind[1561]: Removed session 15.
Apr 21 10:20:59.635289 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:57264.service - OpenSSH per-connection server daemon (10.0.0.1:57264).
Apr 21 10:20:59.665601 sshd[6454]: Accepted publickey for core from 10.0.0.1 port 57264 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:20:59.667164 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:20:59.673041 systemd-logind[1561]: New session 16 of user core.
Apr 21 10:20:59.679572 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:20:59.822499 sshd[6454]: pam_unix(sshd:session): session closed for user core
Apr 21 10:20:59.826688 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:57264.service: Deactivated successfully.
Apr 21 10:20:59.828456 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:20:59.828502 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:20:59.829651 systemd-logind[1561]: Removed session 16.
Apr 21 10:21:04.837146 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:57280.service - OpenSSH per-connection server daemon (10.0.0.1:57280).
Apr 21 10:21:04.876593 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 57280 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:04.882365 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:04.892590 systemd-logind[1561]: New session 17 of user core.
Apr 21 10:21:04.931779 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:21:05.076707 sshd[6475]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:05.085486 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:57288.service - OpenSSH per-connection server daemon (10.0.0.1:57288).
Apr 21 10:21:05.086326 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:57280.service: Deactivated successfully.
Apr 21 10:21:05.089841 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:21:05.090033 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:21:05.091298 systemd-logind[1561]: Removed session 17.
Apr 21 10:21:05.115941 sshd[6487]: Accepted publickey for core from 10.0.0.1 port 57288 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:05.117234 sshd[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:05.121032 systemd-logind[1561]: New session 18 of user core.
Apr 21 10:21:05.130377 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:21:05.329450 sshd[6487]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:05.335120 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:57292.service - OpenSSH per-connection server daemon (10.0.0.1:57292).
Apr 21 10:21:05.335836 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:57288.service: Deactivated successfully.
Apr 21 10:21:05.338931 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:21:05.339022 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:21:05.340811 systemd-logind[1561]: Removed session 18.
Apr 21 10:21:05.371295 sshd[6501]: Accepted publickey for core from 10.0.0.1 port 57292 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:05.372980 sshd[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:05.379274 systemd-logind[1561]: New session 19 of user core.
Apr 21 10:21:05.392418 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:21:05.393385 kubelet[2671]: E0421 10:21:05.393271 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:06.117162 sshd[6501]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:06.126215 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:57308.service - OpenSSH per-connection server daemon (10.0.0.1:57308).
Apr 21 10:21:06.126537 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:57292.service: Deactivated successfully.
Apr 21 10:21:06.131271 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:21:06.132244 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:21:06.134473 systemd-logind[1561]: Removed session 19.
Apr 21 10:21:06.176522 sshd[6549]: Accepted publickey for core from 10.0.0.1 port 57308 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:06.177804 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:06.181686 systemd-logind[1561]: New session 20 of user core.
Apr 21 10:21:06.192504 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:21:06.526309 sshd[6549]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:06.537139 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:57312.service - OpenSSH per-connection server daemon (10.0.0.1:57312).
Apr 21 10:21:06.537478 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:57308.service: Deactivated successfully.
Apr 21 10:21:06.538761 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:21:06.540779 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:21:06.542488 systemd-logind[1561]: Removed session 20.
Apr 21 10:21:06.567715 sshd[6565]: Accepted publickey for core from 10.0.0.1 port 57312 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:06.569224 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:06.573530 systemd-logind[1561]: New session 21 of user core.
Apr 21 10:21:06.586689 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:21:06.709987 sshd[6565]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:06.712921 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:57312.service: Deactivated successfully.
Apr 21 10:21:06.715289 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:21:06.715436 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:21:06.716429 systemd-logind[1561]: Removed session 21.
Apr 21 10:21:11.740184 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:56516.service - OpenSSH per-connection server daemon (10.0.0.1:56516).
Apr 21 10:21:11.777633 sshd[6589]: Accepted publickey for core from 10.0.0.1 port 56516 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:11.778747 sshd[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:11.787206 systemd-logind[1561]: New session 22 of user core.
Apr 21 10:21:11.791309 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:21:11.920665 sshd[6589]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:11.923463 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:56516.service: Deactivated successfully.
Apr 21 10:21:11.925113 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:21:11.925139 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:21:11.925960 systemd-logind[1561]: Removed session 22.
Apr 21 10:21:16.949209 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:56522.service - OpenSSH per-connection server daemon (10.0.0.1:56522).
Apr 21 10:21:16.976868 sshd[6636]: Accepted publickey for core from 10.0.0.1 port 56522 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:16.978586 sshd[6636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:16.982402 systemd-logind[1561]: New session 23 of user core.
Apr 21 10:21:16.989241 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 10:21:17.125072 sshd[6636]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:17.127662 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:56522.service: Deactivated successfully.
Apr 21 10:21:17.129403 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit.
Apr 21 10:21:17.129410 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 10:21:17.130249 systemd-logind[1561]: Removed session 23.
Apr 21 10:21:18.386110 kubelet[2671]: E0421 10:21:18.386020 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:22.140246 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:54416.service - OpenSSH per-connection server daemon (10.0.0.1:54416).
Apr 21 10:21:22.179111 sshd[6693]: Accepted publickey for core from 10.0.0.1 port 54416 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:22.180554 sshd[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:22.187006 systemd-logind[1561]: New session 24 of user core.
Apr 21 10:21:22.196182 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 10:21:22.310603 sshd[6693]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:22.313615 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:54416.service: Deactivated successfully.
Apr 21 10:21:22.315469 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 10:21:22.315481 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit.
Apr 21 10:21:22.316429 systemd-logind[1561]: Removed session 24.