Jan 24 00:53:07.049386 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:53:07.049406 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:53:07.049417 kernel: BIOS-provided physical RAM map:
Jan 24 00:53:07.049423 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:53:07.049428 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:53:07.049434 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:53:07.049440 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:53:07.049446 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:53:07.049451 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:53:07.049519 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:53:07.049526 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:53:07.049531 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:53:07.049536 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:53:07.049542 kernel: NX (Execute Disable) protection: active
Jan 24 00:53:07.049567 kernel: APIC: Static calls initialized
Jan 24 00:53:07.049576 kernel: SMBIOS 2.8 present.
Jan 24 00:53:07.049582 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:53:07.049588 kernel: Hypervisor detected: KVM
Jan 24 00:53:07.049593 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:53:07.049599 kernel: kvm-clock: using sched offset of 3929131731 cycles
Jan 24 00:53:07.049605 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:53:07.049611 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:53:07.049617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:53:07.049623 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:53:07.049632 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:53:07.049638 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:53:07.049644 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:53:07.049650 kernel: Using GB pages for direct mapping
Jan 24 00:53:07.049655 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:53:07.049661 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:53:07.049667 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049673 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049679 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049687 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:53:07.049693 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049699 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049705 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049710 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:53:07.049716 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:53:07.049722 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:53:07.049731 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:53:07.049740 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:53:07.049746 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:53:07.049752 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:53:07.049759 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:53:07.049765 kernel: No NUMA configuration found
Jan 24 00:53:07.049771 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:53:07.049779 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 24 00:53:07.049785 kernel: Zone ranges:
Jan 24 00:53:07.049791 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:53:07.049797 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:53:07.049803 kernel: Normal empty
Jan 24 00:53:07.049810 kernel: Movable zone start for each node
Jan 24 00:53:07.049816 kernel: Early memory node ranges
Jan 24 00:53:07.049822 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:53:07.049828 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:53:07.049834 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:53:07.049843 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:53:07.049849 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:53:07.049855 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:53:07.049861 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:53:07.049867 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:53:07.049873 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:53:07.049879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:53:07.049886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:53:07.049913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:53:07.049922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:53:07.049928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:53:07.049934 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:53:07.049940 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:53:07.049946 kernel: TSC deadline timer available
Jan 24 00:53:07.049971 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:53:07.049977 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:53:07.050000 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:53:07.050006 kernel: kvm-guest: setup PV sched yield
Jan 24 00:53:07.050015 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:53:07.050039 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:53:07.050046 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:53:07.050052 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:53:07.050058 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:53:07.050081 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:53:07.050087 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:53:07.050093 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:53:07.050116 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:53:07.050126 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:53:07.050176 kernel: random: crng init done
Jan 24 00:53:07.050184 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:53:07.050191 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:53:07.050197 kernel: Fallback order for Node 0: 0
Jan 24 00:53:07.050203 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:53:07.050209 kernel: Policy zone: DMA32
Jan 24 00:53:07.050233 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:53:07.050240 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 24 00:53:07.050249 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:53:07.050255 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:53:07.050278 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:53:07.050285 kernel: Dynamic Preempt: voluntary
Jan 24 00:53:07.050291 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:53:07.050298 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:53:07.050304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:53:07.050311 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:53:07.050319 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:53:07.050326 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:53:07.050332 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:53:07.050338 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:53:07.050344 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:53:07.050350 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:53:07.050357 kernel: Console: colour VGA+ 80x25
Jan 24 00:53:07.050363 kernel: printk: console [ttyS0] enabled
Jan 24 00:53:07.050369 kernel: ACPI: Core revision 20230628
Jan 24 00:53:07.050375 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:53:07.050384 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:53:07.050390 kernel: x2apic enabled
Jan 24 00:53:07.050396 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:53:07.050403 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:53:07.050409 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:53:07.050415 kernel: kvm-guest: setup PV IPIs
Jan 24 00:53:07.050421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:53:07.050437 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:53:07.050444 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:53:07.050450 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:53:07.050517 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:53:07.050528 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:53:07.050535 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:53:07.050541 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:53:07.050548 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:53:07.050555 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:53:07.050563 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:53:07.050570 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:53:07.050577 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:53:07.050583 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:53:07.050590 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:53:07.050596 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:53:07.050603 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:53:07.050609 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:53:07.050618 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:53:07.050624 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:53:07.050631 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:53:07.050637 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:53:07.050643 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:53:07.050650 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:53:07.050656 kernel: landlock: Up and running.
Jan 24 00:53:07.050662 kernel: SELinux: Initializing.
Jan 24 00:53:07.050669 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:53:07.050678 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:53:07.050684 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:53:07.050690 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:53:07.050697 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:53:07.050704 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:53:07.050710 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:53:07.050716 kernel: signal: max sigframe size: 1776
Jan 24 00:53:07.050723 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:53:07.050729 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:53:07.050738 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:53:07.050745 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:53:07.050751 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:53:07.050757 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:53:07.050764 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:53:07.050770 kernel: smpboot: Max logical packages: 1
Jan 24 00:53:07.050777 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:53:07.050783 kernel: devtmpfs: initialized
Jan 24 00:53:07.050789 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:53:07.050798 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:53:07.050805 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:53:07.050811 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:53:07.050817 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:53:07.050824 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:53:07.050830 kernel: audit: type=2000 audit(1769215985.820:1): state=initialized audit_enabled=0 res=1
Jan 24 00:53:07.050836 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:53:07.050843 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:53:07.050849 kernel: cpuidle: using governor menu
Jan 24 00:53:07.050858 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:53:07.050864 kernel: dca service started, version 1.12.1
Jan 24 00:53:07.050871 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:53:07.050877 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:53:07.050883 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:53:07.050890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:53:07.050896 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:53:07.050903 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:53:07.050909 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:53:07.050918 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:53:07.050924 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:53:07.050930 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:53:07.050937 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:53:07.050944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:53:07.050950 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:53:07.050956 kernel: ACPI: Interpreter enabled
Jan 24 00:53:07.050963 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:53:07.050969 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:53:07.050977 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:53:07.050984 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:53:07.050990 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:53:07.050997 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:53:07.051220 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:53:07.051358 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:53:07.051557 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:53:07.051573 kernel: PCI host bridge to bus 0000:00
Jan 24 00:53:07.051700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:53:07.051812 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:53:07.051922 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:53:07.052030 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:53:07.052139 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:53:07.052296 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:53:07.052413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:53:07.052607 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:53:07.052740 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:53:07.052861 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:53:07.052981 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:53:07.053099 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:53:07.053267 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:53:07.053409 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:53:07.053587 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 00:53:07.053713 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:53:07.053833 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:53:07.053961 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:53:07.054081 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 00:53:07.054240 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:53:07.054372 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:53:07.054554 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:53:07.054677 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 24 00:53:07.054800 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:53:07.054919 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:53:07.055066 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:53:07.055352 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:53:07.055551 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:53:07.055682 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:53:07.055801 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 24 00:53:07.055919 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:53:07.056044 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:53:07.056198 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:53:07.056213 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:53:07.056221 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:53:07.056227 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:53:07.056234 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:53:07.056241 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:53:07.056247 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:53:07.056253 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:53:07.056260 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:53:07.056266 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:53:07.056275 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:53:07.056282 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:53:07.056288 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:53:07.056294 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:53:07.056301 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:53:07.056307 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:53:07.056313 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:53:07.056320 kernel: iommu: Default domain type: Translated
Jan 24 00:53:07.056326 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:53:07.056335 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:53:07.056342 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:53:07.056348 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:53:07.056355 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:53:07.056541 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:53:07.056665 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:53:07.056783 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:53:07.056792 kernel: vgaarb: loaded
Jan 24 00:53:07.056802 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:53:07.056809 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:53:07.056816 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:53:07.056822 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:53:07.056829 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:53:07.056835 kernel: pnp: PnP ACPI init
Jan 24 00:53:07.056964 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:53:07.056974 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:53:07.056984 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:53:07.056991 kernel: NET: Registered PF_INET protocol family
Jan 24 00:53:07.056998 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:53:07.057005 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:53:07.057011 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:53:07.057018 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:53:07.057024 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:53:07.057031 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:53:07.057038 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:53:07.057047 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:53:07.057053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:53:07.057059 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:53:07.057211 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:53:07.057325 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:53:07.057436 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:53:07.057640 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:53:07.057753 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:53:07.057868 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:53:07.057877 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:53:07.057884 kernel: Initialise system trusted keyrings
Jan 24 00:53:07.057890 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:53:07.057897 kernel: Key type asymmetric registered
Jan 24 00:53:07.057903 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:53:07.057910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:53:07.057916 kernel: io scheduler mq-deadline registered
Jan 24 00:53:07.057923 kernel: io scheduler kyber registered
Jan 24 00:53:07.057929 kernel: io scheduler bfq registered
Jan 24 00:53:07.057939 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:53:07.057946 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:53:07.057953 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:53:07.057959 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:53:07.057966 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:53:07.057973 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:53:07.057979 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:53:07.057986 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:53:07.057992 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:53:07.058120 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:53:07.058130 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:53:07.058284 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:53:07.058400 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:53:06 UTC (1769215986)
Jan 24 00:53:07.058564 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:53:07.058575 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:53:07.058582 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:53:07.058592 kernel: Segment Routing with IPv6
Jan 24 00:53:07.058598 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:53:07.058605 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:53:07.058611 kernel: Key type dns_resolver registered
Jan 24 00:53:07.058618 kernel: IPI shorthand broadcast: enabled
Jan 24 00:53:07.058624 kernel: sched_clock: Marking stable (1033019689, 338758506)->(1689879509, -318101314)
Jan 24 00:53:07.058631 kernel: registered taskstats version 1
Jan 24 00:53:07.058637 kernel: Loading compiled-in X.509 certificates
Jan 24 00:53:07.058644 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:53:07.058653 kernel: Key type .fscrypt registered
Jan 24 00:53:07.058659 kernel: Key type fscrypt-provisioning registered
Jan 24 00:53:07.058665 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:53:07.058672 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:53:07.058678 kernel: ima: No architecture policies found
Jan 24 00:53:07.058685 kernel: clk: Disabling unused clocks
Jan 24 00:53:07.058691 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:53:07.058698 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:53:07.058705 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:53:07.058713 kernel: Run /init as init process
Jan 24 00:53:07.058720 kernel: with arguments:
Jan 24 00:53:07.058726 kernel: /init
Jan 24 00:53:07.058732 kernel: with environment:
Jan 24 00:53:07.058739 kernel: HOME=/
Jan 24 00:53:07.058745 kernel: TERM=linux
Jan 24 00:53:07.058753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:53:07.058762 systemd[1]: Detected virtualization kvm.
Jan 24 00:53:07.058771 systemd[1]: Detected architecture x86-64.
Jan 24 00:53:07.058778 systemd[1]: Running in initrd.
Jan 24 00:53:07.058785 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:53:07.058791 systemd[1]: Hostname set to .
Jan 24 00:53:07.058798 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:53:07.058805 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:53:07.058812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:53:07.058819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:53:07.058829 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:53:07.058836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:53:07.058843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:53:07.058850 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:53:07.058859 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:53:07.058866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:53:07.058872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:53:07.058882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:53:07.058889 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:53:07.058896 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:53:07.058903 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:53:07.058921 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:53:07.058930 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:53:07.058940 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:53:07.058947 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:53:07.058954 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:53:07.058961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:53:07.058969 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:53:07.058976 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:53:07.058983 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:53:07.058990 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:53:07.058997 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:53:07.059006 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:53:07.059013 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:53:07.059020 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:53:07.059027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:53:07.059034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:53:07.059059 systemd-journald[194]: Collecting audit messages is disabled.
Jan 24 00:53:07.059079 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:53:07.059087 systemd-journald[194]: Journal started
Jan 24 00:53:07.059102 systemd-journald[194]: Runtime Journal (/run/log/journal/ebd472be1df9469eb64f11eb48766516) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:53:07.064594 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:53:07.069038 systemd-modules-load[195]: Inserted module 'overlay'
Jan 24 00:53:07.217562 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:53:07.217590 kernel: Bridge firewalling registered
Jan 24 00:53:07.070090 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:53:07.096098 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 24 00:53:07.227510 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:53:07.232638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:53:07.239401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:53:07.260797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:53:07.261990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:53:07.263908 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:53:07.280592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:53:07.281112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:53:07.283753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:53:07.290882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:53:07.292842 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:53:07.308074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:53:07.327730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:53:07.337034 dracut-cmdline[223]: dracut-dracut-053
Jan 24 00:53:07.337034 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:53:07.368368 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:53:07.386754 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:53:07.421284 systemd-resolved[278]: Positive Trust Anchors:
Jan 24 00:53:07.421332 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:53:07.421373 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:53:07.424852 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 24 00:53:07.426422 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:53:07.460077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:53:07.471537 kernel: SCSI subsystem initialized
Jan 24 00:53:07.483542 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:53:07.496585 kernel: iscsi: registered transport (tcp)
Jan 24 00:53:07.518433 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:53:07.518571 kernel: QLogic iSCSI HBA Driver
Jan 24 00:53:07.576986 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:53:07.593676 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:53:07.630215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:53:07.630270 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:53:07.633142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:53:07.683184 kernel: raid6: avx2x4 gen() 29704 MB/s
Jan 24 00:53:07.700531 kernel: raid6: avx2x2 gen() 27106 MB/s
Jan 24 00:53:07.719892 kernel: raid6: avx2x1 gen() 23233 MB/s
Jan 24 00:53:07.719986 kernel: raid6: using algorithm avx2x4 gen() 29704 MB/s
Jan 24 00:53:07.739844 kernel: raid6: .... xor() 4842 MB/s, rmw enabled
Jan 24 00:53:07.739879 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:53:07.762592 kernel: xor: automatically using best checksumming function avx
Jan 24 00:53:07.924542 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:53:07.937911 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:53:07.957631 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:53:07.973960 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 24 00:53:07.978664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:53:07.998693 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:53:08.017363 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Jan 24 00:53:08.065377 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:53:08.089717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:53:08.180588 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:53:08.199859 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:53:08.210703 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:53:08.220719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:53:08.230646 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 24 00:53:08.231150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:53:08.238828 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:53:08.246989 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 24 00:53:08.256552 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:53:08.256588 kernel: GPT:9289727 != 19775487
Jan 24 00:53:08.256602 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:53:08.256611 kernel: GPT:9289727 != 19775487
Jan 24 00:53:08.256619 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:53:08.260101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:53:08.273522 kernel: libata version 3.00 loaded.
Jan 24 00:53:08.275899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:53:08.284141 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:53:08.290438 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:53:08.310795 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Jan 24 00:53:08.310815 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (471)
Jan 24 00:53:08.313863 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:53:08.314060 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:53:08.320996 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 24 00:53:08.337980 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:53:08.338004 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:53:08.338015 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:53:08.338270 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:53:08.332868 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 24 00:53:08.351296 kernel: scsi host0: ahci
Jan 24 00:53:08.351561 kernel: scsi host1: ahci
Jan 24 00:53:08.351735 kernel: scsi host2: ahci
Jan 24 00:53:08.351892 kernel: scsi host3: ahci
Jan 24 00:53:08.352074 kernel: scsi host4: ahci
Jan 24 00:53:08.350694 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 24 00:53:08.383966 kernel: scsi host5: ahci
Jan 24 00:53:08.384215 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 24 00:53:08.384230 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 24 00:53:08.384246 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 24 00:53:08.384256 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 24 00:53:08.384265 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 24 00:53:08.384275 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 24 00:53:08.372802 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 24 00:53:08.391931 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:53:08.416697 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:53:08.426223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:53:08.432683 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:53:08.426304 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:53:08.445617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:53:08.445634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:53:08.445644 disk-uuid[552]: Primary Header is updated.
Jan 24 00:53:08.445644 disk-uuid[552]: Secondary Entries is updated.
Jan 24 00:53:08.445644 disk-uuid[552]: Secondary Header is updated.
Jan 24 00:53:08.450212 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:53:08.460553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:53:08.460639 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:53:08.467203 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:53:08.489235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:53:08.659215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:53:08.678682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:53:08.691515 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:53:08.691567 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:53:08.692611 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:53:08.697582 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:53:08.702505 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 24 00:53:08.702556 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:53:08.702576 kernel: ata3.00: applying bridge limits
Jan 24 00:53:08.703865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:53:08.713993 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:53:08.716524 kernel: ata3.00: configured for UDMA/100
Jan 24 00:53:08.720534 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:53:08.770356 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:53:08.770765 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:53:08.786577 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:53:09.446542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:53:09.446757 disk-uuid[553]: The operation has completed successfully.
Jan 24 00:53:09.480905 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:53:09.481079 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:53:09.522854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:53:09.535999 sh[595]: Success
Jan 24 00:53:09.553572 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:53:09.610107 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:53:09.634087 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:53:09.638685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:53:09.657089 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:53:09.657214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:09.657234 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:53:09.660304 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:53:09.662637 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:53:09.673406 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:53:09.680754 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:53:09.696812 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:53:09.700574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:53:09.719896 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:53:09.719949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:09.719960 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:53:09.729571 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:53:09.742692 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:53:09.748903 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:53:09.756156 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:53:09.771728 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:53:09.833706 ignition[695]: Ignition 2.19.0
Jan 24 00:53:09.833718 ignition[695]: Stage: fetch-offline
Jan 24 00:53:09.833753 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:09.833763 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:09.833904 ignition[695]: parsed url from cmdline: ""
Jan 24 00:53:09.833910 ignition[695]: no config URL provided
Jan 24 00:53:09.833918 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:53:09.833931 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:53:09.833965 ignition[695]: op(1): [started] loading QEMU firmware config module
Jan 24 00:53:09.833973 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 24 00:53:09.854928 ignition[695]: op(1): [finished] loading QEMU firmware config module
Jan 24 00:53:09.865960 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:53:09.885741 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:53:09.915324 systemd-networkd[783]: lo: Link UP
Jan 24 00:53:09.915357 systemd-networkd[783]: lo: Gained carrier
Jan 24 00:53:09.917116 systemd-networkd[783]: Enumeration completed
Jan 24 00:53:09.918115 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:53:09.918120 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:53:09.919370 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:53:09.921373 systemd-networkd[783]: eth0: Link UP
Jan 24 00:53:09.921377 systemd-networkd[783]: eth0: Gained carrier
Jan 24 00:53:09.921385 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:53:09.927714 systemd[1]: Reached target network.target - Network.
Jan 24 00:53:09.953627 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:53:10.097047 ignition[695]: parsing config with SHA512: 54ad0575f5cd974aa6cf7d09c11efc20c114c01718d3a0850a3862897c91ba1044e5384c67090784edd4a2a9e775e873b9ce5cdcc00b1733011bbd8513ecfb4d
Jan 24 00:53:10.106324 unknown[695]: fetched base config from "system"
Jan 24 00:53:10.106363 unknown[695]: fetched user config from "qemu"
Jan 24 00:53:10.107039 ignition[695]: fetch-offline: fetch-offline passed
Jan 24 00:53:10.107129 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.102
Jan 24 00:53:10.107134 ignition[695]: Ignition finished successfully
Jan 24 00:53:10.107139 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Jan 24 00:53:10.109927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:53:10.115123 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 24 00:53:10.127682 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:53:10.154421 ignition[787]: Ignition 2.19.0
Jan 24 00:53:10.154454 ignition[787]: Stage: kargs
Jan 24 00:53:10.154911 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:10.158707 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:53:10.154931 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:10.156227 ignition[787]: kargs: kargs passed
Jan 24 00:53:10.156291 ignition[787]: Ignition finished successfully
Jan 24 00:53:10.179760 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:53:10.196990 ignition[795]: Ignition 2.19.0
Jan 24 00:53:10.197014 ignition[795]: Stage: disks
Jan 24 00:53:10.199646 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:53:10.197241 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:10.204652 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:53:10.197255 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:10.209940 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:53:10.197964 ignition[795]: disks: disks passed
Jan 24 00:53:10.216228 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:53:10.198004 ignition[795]: Ignition finished successfully
Jan 24 00:53:10.222137 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:53:10.225243 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:53:10.241716 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:53:10.257859 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:53:10.263444 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:53:10.278649 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:53:10.387537 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:53:10.387982 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:53:10.393524 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:53:10.411614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:53:10.418944 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:53:10.436724 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Jan 24 00:53:10.436751 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:53:10.436762 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:10.436772 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:53:10.436781 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:53:10.440214 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:53:10.440280 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:53:10.440308 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:53:10.461336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:53:10.467283 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:53:10.485659 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:53:10.536311 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:53:10.546397 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:53:10.556974 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:53:10.564698 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:53:10.686911 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:53:10.704616 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:53:10.712821 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:53:10.717992 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:53:10.726588 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:53:10.751231 ignition[927]: INFO : Ignition 2.19.0
Jan 24 00:53:10.751231 ignition[927]: INFO : Stage: mount
Jan 24 00:53:10.757415 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:10.757415 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:10.757415 ignition[927]: INFO : mount: mount passed
Jan 24 00:53:10.757415 ignition[927]: INFO : Ignition finished successfully
Jan 24 00:53:10.755172 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:53:10.776630 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:53:10.785700 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:53:10.793546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:53:10.806570 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Jan 24 00:53:10.806608 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:53:10.812093 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:53:10.812125 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:53:10.821540 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:53:10.822862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:53:10.851613 ignition[957]: INFO : Ignition 2.19.0
Jan 24 00:53:10.851613 ignition[957]: INFO : Stage: files
Jan 24 00:53:10.856878 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:10.856878 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:10.856878 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:53:10.856878 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:53:10.856878 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:53:10.856878 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:53:10.884047 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:53:10.884047 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:53:10.884047 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:53:10.884047 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 00:53:10.857737 unknown[957]: wrote ssh authorized keys file for user: core
Jan 24 00:53:10.915805 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:53:11.089239 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:53:11.089239 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:53:11.106327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 24 00:53:11.327760 systemd-networkd[783]: eth0: Gained IPv6LL
Jan 24 00:53:11.380683 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:53:11.873137 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:53:11.873137 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 24 00:53:11.885992 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:53:11.947589 ignition[957]: INFO : files: files passed
Jan 24 00:53:11.947589 ignition[957]: INFO : Ignition finished successfully
Jan 24 00:53:11.911543 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:53:11.953712 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:53:11.962052 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:53:11.970004 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:53:12.031006 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 24 00:53:11.970131 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:53:12.048115 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:53:12.048115 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:53:11.983959 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:53:12.079756 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:53:11.991790 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:53:12.000867 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:53:12.038120 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:53:12.038323 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:53:12.048048 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:53:12.053034 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:53:12.057745 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:53:12.058816 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:53:12.084980 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:53:12.103728 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:53:12.116607 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:53:12.120849 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:53:12.124866 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:53:12.131577 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:53:12.131729 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:53:12.139241 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:53:12.145885 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:53:12.152192 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:53:12.158892 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:53:12.162568 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:53:12.168410 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:53:12.174437 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:53:12.181906 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:53:12.285865 ignition[1011]: INFO : Ignition 2.19.0
Jan 24 00:53:12.285865 ignition[1011]: INFO : Stage: umount
Jan 24 00:53:12.285865 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:53:12.285865 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:53:12.285865 ignition[1011]: INFO : umount: umount passed
Jan 24 00:53:12.285865 ignition[1011]: INFO : Ignition finished successfully
Jan 24 00:53:12.182099 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:53:12.183136 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:53:12.184131 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:53:12.184304 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:53:12.186174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:53:12.187154 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:53:12.188114 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:53:12.188455 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:53:12.188639 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:53:12.188732 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:53:12.190097 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:53:12.190256 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:53:12.191166 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:53:12.192069 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:53:12.195604 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:53:12.196052 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:53:12.196602 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:53:12.197058 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:53:12.197166 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:53:12.197660 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:53:12.197800 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:53:12.198033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:53:12.198156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:53:12.198610 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:53:12.198726 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:53:12.256860 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:53:12.262798 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:53:12.262951 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:53:12.273694 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:53:12.279366 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:53:12.279563 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:53:12.285991 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:53:12.286148 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:53:12.297138 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:53:12.297316 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:53:12.301733 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:53:12.302755 systemd[1]: Stopped target network.target - Network.
Jan 24 00:53:12.305289 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:53:12.305423 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:53:12.306393 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:53:12.306562 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:53:12.307149 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:53:12.307285 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:53:12.309299 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:53:12.309370 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:53:12.311590 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:53:12.573936 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:53:12.311952 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:53:12.313129 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:53:12.313320 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:53:12.330772 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:53:12.330990 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:53:12.336545 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:53:12.336626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:53:12.354663 systemd-networkd[783]: eth0: DHCPv6 lease lost
Jan 24 00:53:12.357904 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:53:12.358116 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:53:12.365721 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:53:12.365793 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:53:12.392813 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:53:12.398153 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:53:12.398293 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:53:12.406680 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:53:12.406755 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:53:12.413890 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:53:12.413962 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:53:12.420121 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:53:12.426610 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:53:12.426870 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:53:12.447190 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:53:12.447594 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:53:12.452943 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:53:12.453090 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:53:12.459042 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:53:12.459107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:53:12.464641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:53:12.464687 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:53:12.467990 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:53:12.468046 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:53:12.474193 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:53:12.474276 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:53:12.480827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:53:12.480878 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:53:12.487363 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:53:12.487413 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:53:12.507000 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:53:12.514922 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:53:12.514988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:53:12.519734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:53:12.520120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:53:12.521281 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:53:12.521720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:53:12.522303 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:53:12.524241 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:53:12.538910 systemd[1]: Switching root.
Jan 24 00:53:12.691834 systemd-journald[194]: Journal stopped
Jan 24 00:53:13.908074 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:53:13.908141 kernel: SELinux: policy capability open_perms=1
Jan 24 00:53:13.908154 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:53:13.908175 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:53:13.908194 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:53:13.908290 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:53:13.908311 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:53:13.908326 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:53:13.908336 kernel: audit: type=1403 audit(1769215992.758:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:53:13.908352 systemd[1]: Successfully loaded SELinux policy in 50.253ms.
Jan 24 00:53:13.908378 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.542ms.
Jan 24 00:53:13.908394 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:53:13.908405 systemd[1]: Detected virtualization kvm.
Jan 24 00:53:13.908416 systemd[1]: Detected architecture x86-64.
Jan 24 00:53:13.908426 systemd[1]: Detected first boot.
Jan 24 00:53:13.908440 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:53:13.908450 zram_generator::config[1055]: No configuration found.
Jan 24 00:53:13.908549 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:53:13.908563 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:53:13.908574 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:53:13.908585 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:53:13.908597 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:53:13.908608 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:53:13.908622 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:53:13.908633 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:53:13.908644 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:53:13.908656 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:53:13.908667 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:53:13.908677 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:53:13.908688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:53:13.908699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:53:13.908710 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:53:13.908724 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:53:13.908735 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:53:13.908746 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:53:13.908757 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:53:13.908768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:53:13.908779 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:53:13.908789 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:53:13.908800 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:53:13.908813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:53:13.908824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:53:13.908835 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:53:13.908846 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:53:13.908856 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:53:13.908867 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:53:13.908878 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:53:13.908890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:53:13.908903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:53:13.908914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:53:13.908925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:53:13.908936 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:53:13.908947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:53:13.908959 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:53:13.908970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:53:13.908981 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:53:13.908991 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:53:13.909004 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:53:13.909015 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:53:13.909026 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:53:13.909037 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:53:13.909048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:53:13.909059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:53:13.909069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:53:13.909080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:53:13.909093 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:53:13.909104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:53:13.909115 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:53:13.909127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:53:13.909138 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:53:13.909148 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:53:13.909159 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:53:13.909170 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:53:13.909181 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:53:13.909193 kernel: fuse: init (API version 7.39)
Jan 24 00:53:13.909204 kernel: ACPI: bus type drm_connector registered
Jan 24 00:53:13.909267 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:53:13.909280 kernel: loop: module loaded
Jan 24 00:53:13.909290 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:53:13.909301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:53:13.909312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:53:13.909323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:53:13.909334 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:53:13.909370 systemd-journald[1140]: Collecting audit messages is disabled.
Jan 24 00:53:13.909392 systemd[1]: Stopped verity-setup.service.
Jan 24 00:53:13.909404 systemd-journald[1140]: Journal started
Jan 24 00:53:13.909422 systemd-journald[1140]: Runtime Journal (/run/log/journal/ebd472be1df9469eb64f11eb48766516) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:53:13.411274 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:53:13.435851 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:53:13.436686 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:53:13.437083 systemd[1]: systemd-journald.service: Consumed 1.653s CPU time.
Jan 24 00:53:13.917529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:53:13.923833 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:53:13.924952 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:53:13.928255 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:53:13.931723 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:53:13.934840 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:53:13.938582 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:53:13.942131 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:53:13.945293 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:53:13.949094 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:53:13.953105 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:53:13.953342 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:53:13.957351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:53:13.957591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:53:13.961305 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:53:13.961543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:53:13.965036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:53:13.965263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:53:13.969132 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:53:13.969370 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:53:13.973018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:53:13.973247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:53:13.976798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:53:13.980660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:53:13.985098 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:53:13.997147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:53:14.004434 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:53:14.025611 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:53:14.030336 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:53:14.033814 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:53:14.033890 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:53:14.038344 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:53:14.050654 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:53:14.055559 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:53:14.058741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:53:14.060435 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:53:14.065154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:53:14.069084 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:53:14.070307 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:53:14.074088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:53:14.075625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:53:14.078331 systemd-journald[1140]: Time spent on flushing to /var/log/journal/ebd472be1df9469eb64f11eb48766516 is 39.939ms for 943 entries.
Jan 24 00:53:14.078331 systemd-journald[1140]: System Journal (/var/log/journal/ebd472be1df9469eb64f11eb48766516) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:53:14.130570 systemd-journald[1140]: Received client request to flush runtime journal.
Jan 24 00:53:14.130606 kernel: loop0: detected capacity change from 0 to 219144
Jan 24 00:53:14.091706 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:53:14.097296 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:53:14.107692 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:53:14.116170 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:53:14.120571 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:53:14.132420 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:53:14.140703 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:53:14.145099 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:53:14.150116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:53:14.153627 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:53:14.163981 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:53:14.171064 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:53:14.180557 kernel: loop1: detected capacity change from 0 to 142488
Jan 24 00:53:14.185074 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:53:14.191746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:53:14.196691 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 00:53:14.210136 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:53:14.211130 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:53:14.235512 kernel: loop2: detected capacity change from 0 to 140768
Jan 24 00:53:14.232835 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Jan 24 00:53:14.232851 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Jan 24 00:53:14.240104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:53:14.288522 kernel: loop3: detected capacity change from 0 to 219144
Jan 24 00:53:14.301548 kernel: loop4: detected capacity change from 0 to 142488
Jan 24 00:53:14.315582 kernel: loop5: detected capacity change from 0 to 140768
Jan 24 00:53:14.329937 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 24 00:53:14.330639 (sd-merge)[1194]: Merged extensions into '/usr'.
Jan 24 00:53:14.334781 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:53:14.334809 systemd[1]: Reloading...
Jan 24 00:53:14.398517 zram_generator::config[1219]: No configuration found.
Jan 24 00:53:14.487835 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:53:14.527515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:53:14.570935 systemd[1]: Reloading finished in 235 ms.
Jan 24 00:53:14.606675 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:53:14.612634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:53:14.618797 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:53:14.649820 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:53:14.653409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:53:14.658805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:53:14.668682 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:53:14.668720 systemd[1]: Reloading...
Jan 24 00:53:14.685777 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:53:14.686151 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
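[Note: the (sd-merge) lines above are systemd-sysext overlaying the three system extension images, including the kubernetes image Ignition linked under /etc/extensions earlier, onto /usr; the loop0-loop5 capacity changes are those images being attached. A minimal, illustrative way to inspect the result after boot, assuming the standard systemd-sysext search directories and the extension-release metadata merged images conventionally carry; this is an inspection sketch, not part of the boot flow itself.]

    import os

    # Sysext images are picked up from these directories; op(9) above
    # created the /etc/extensions/kubernetes.raw symlink.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        if os.path.isdir(d):
            for name in sorted(os.listdir(d)):
                p = os.path.join(d, name)
                print(f"{p} -> {os.path.realpath(p)}")

    # After merging, each extension's release file is visible in /usr.
    rel = "/usr/lib/extension-release.d"
    if os.path.isdir(rel):
        print(sorted(os.listdir(rel)))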
Jan 24 00:53:14.687376 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:53:14.688150 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 24 00:53:14.688306 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 24 00:53:14.691839 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Jan 24 00:53:14.692086 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:53:14.692095 systemd-tmpfiles[1259]: Skipping /boot
Jan 24 00:53:14.712715 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:53:14.712838 systemd-tmpfiles[1259]: Skipping /boot
Jan 24 00:53:14.730540 zram_generator::config[1289]: No configuration found.
Jan 24 00:53:14.784541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1295)
Jan 24 00:53:14.830523 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 24 00:53:14.838531 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:53:14.849514 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:53:14.856154 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:53:14.856838 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:53:14.861098 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 24 00:53:14.873654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:53:14.956050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:53:14.960766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:53:14.962099 systemd[1]: Reloading finished in 292 ms.
Jan 24 00:53:14.999556 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:53:15.023157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:53:15.023775 kernel: kvm_amd: TSC scaling supported
Jan 24 00:53:15.023836 kernel: kvm_amd: Nested Virtualization enabled
Jan 24 00:53:15.023864 kernel: kvm_amd: Nested Paging enabled
Jan 24 00:53:15.023905 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 24 00:53:15.023943 kernel: kvm_amd: PMU virtualization is disabled
Jan 24 00:53:15.086689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:53:15.099642 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:53:15.113426 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:53:15.124959 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:53:15.144934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:53:15.161052 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:53:15.168128 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:53:15.173110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:53:15.175762 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:53:15.182655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:53:15.188813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:53:15.196744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:53:15.202977 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:53:15.208053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:53:15.213768 lvm[1367]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:53:15.215044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:53:15.223392 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:53:15.230285 augenrules[1381]: No rules
Jan 24 00:53:15.231636 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:53:15.240841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:53:15.254710 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 00:53:15.257327 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:53:15.262986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:53:15.264381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:53:15.265538 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:53:15.269867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:53:15.270089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:53:15.270691 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:53:15.270883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:53:15.271371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:53:15.271631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:53:15.274152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:53:15.274716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:53:15.282694 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:53:15.284018 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:53:15.284117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:53:15.288673 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:53:15.294674 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:53:15.303958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:53:15.309900 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:53:15.318386 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:53:15.323884 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:53:15.329668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:53:15.336183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:53:15.344729 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:53:15.349062 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:53:15.357703 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:53:15.367734 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:53:15.388995 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:53:15.452635 systemd-networkd[1387]: lo: Link UP
Jan 24 00:53:15.452668 systemd-networkd[1387]: lo: Gained carrier
Jan 24 00:53:15.452763 systemd-resolved[1388]: Positive Trust Anchors:
Jan 24 00:53:15.452772 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:53:15.452801 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:53:15.454434 systemd-networkd[1387]: Enumeration completed
Jan 24 00:53:15.455699 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:53:15.455719 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:53:15.456449 systemd-resolved[1388]: Defaulting to hostname 'linux'.
Jan 24 00:53:15.456884 systemd-networkd[1387]: eth0: Link UP
Jan 24 00:53:15.456906 systemd-networkd[1387]: eth0: Gained carrier
Jan 24 00:53:15.456917 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:53:15.493613 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:53:15.494677 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection.
Jan 24 00:53:16.345171 systemd-resolved[1388]: Clock change detected. Flushing caches.
Jan 24 00:53:16.345214 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 24 00:53:16.345268 systemd-timesyncd[1389]: Initial clock synchronization to Sat 2026-01-24 00:53:16.345101 UTC.
Jan 24 00:53:16.389401 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 00:53:16.396477 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:53:16.400046 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:53:16.403659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:53:16.408118 systemd[1]: Reached target network.target - Network.
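[Note: the jump in journal timestamps from 00:53:15.49 to 00:53:16.34 is the clock step systemd-timesyncd applies here ("Clock change detected. Flushing caches."). Treating the last pre-sync entry and the reported synchronization time as near-simultaneous gives a rough size for the step; a quick check using only the two timestamps from the log:]

    from datetime import datetime

    # Approximation: both instants are taken from adjacent journal entries.
    before = datetime(2026, 1, 24, 0, 53, 15, 494677)  # last pre-sync entry
    after = datetime(2026, 1, 24, 0, 53, 16, 345101)   # time set by timesyncd
    print((after - before).total_seconds())  # ~0.85 s forward step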
Jan 24 00:53:16.410919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:53:16.414447 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:53:16.417683 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:53:16.421350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:53:16.425126 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:53:16.428800 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:53:16.428849 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:53:16.431474 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:53:16.434620 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:53:16.438040 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:53:16.441809 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:53:16.445762 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:53:16.450773 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:53:16.469338 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:53:16.474292 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:53:16.478474 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:53:16.482138 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:53:16.485196 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:53:16.488285 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:53:16.488337 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:53:16.489545 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:53:16.494170 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:53:16.498413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:53:16.503051 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:53:16.506774 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:53:16.507374 jq[1427]: false
Jan 24 00:53:16.508064 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:53:16.513190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:53:16.520188 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:53:16.525093 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:53:16.529154 dbus-daemon[1426]: [system] SELinux support is enabled
Jan 24 00:53:16.536859 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:53:16.537854 extend-filesystems[1428]: Found loop3
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found loop4
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found loop5
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found sr0
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda1
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda2
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda3
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found usr
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda4
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda6
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda7
Jan 24 00:53:16.540353 extend-filesystems[1428]: Found vda9
Jan 24 00:53:16.540353 extend-filesystems[1428]: Checking size of /dev/vda9
Jan 24 00:53:16.540399 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:53:16.562488 extend-filesystems[1428]: Resized partition /dev/vda9
Jan 24 00:53:16.541166 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:53:16.544153 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:53:16.548840 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:53:16.552389 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:53:16.555638 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:53:16.555993 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:53:16.556434 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:53:16.556756 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:53:16.559819 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:53:16.560057 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:53:16.563914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:53:16.565283 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:53:16.565591 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:53:16.565615 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:53:16.572366 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:53:16.575834 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 24 00:53:16.575859 jq[1446]: true
Jan 24 00:53:16.580777 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:53:16.582691 update_engine[1443]: I20260124 00:53:16.582623 1443 main.cc:92] Flatcar Update Engine starting
Jan 24 00:53:16.584549 update_engine[1443]: I20260124 00:53:16.584518 1443 update_check_scheduler.cc:74] Next update check in 7m13s
Jan 24 00:53:16.601869 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:53:16.604052 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1299)
Jan 24 00:53:16.619219 tar[1449]: linux-amd64/LICENSE
Jan 24 00:53:16.619510 tar[1449]: linux-amd64/helm
Jan 24 00:53:16.633854 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 24 00:53:16.632517 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:53:16.663770 jq[1459]: true
Jan 24 00:53:16.667166 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 00:53:16.667166 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 24 00:53:16.667166 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 24 00:53:16.683887 extend-filesystems[1428]: Resized filesystem in /dev/vda9
Jan 24 00:53:16.669404 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:53:16.669628 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:53:16.711549 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:53:16.715530 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:53:16.720205 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:53:16.733605 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 24 00:53:16.733661 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:53:16.734877 systemd-logind[1439]: New seat seat0.
Jan 24 00:53:16.738123 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:53:16.738313 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:53:16.797067 containerd[1458]: time="2026-01-24T00:53:16.796639373Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:53:16.816260 containerd[1458]: time="2026-01-24T00:53:16.816227294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819084 containerd[1458]: time="2026-01-24T00:53:16.819028352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819140 containerd[1458]: time="2026-01-24T00:53:16.819090839Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
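[Note: the extend-filesystems entries above grow the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 blocks at the 4 KiB block size resize2fs reports. In bytes that is roughly 2.1 GiB before and 7.1 GiB after; a quick check of the arithmetic:]

    # Figures from the resize2fs/EXT4-fs messages above.
    block = 4096
    old_blocks, new_blocks = 553472, 1864699
    print(old_blocks * block / 2**30)  # ~2.11 GiB before
    print(new_blocks * block / 2**30)  # ~7.11 GiB after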
Jan 24 00:53:16.819140 containerd[1458]: time="2026-01-24T00:53:16.819112419Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:53:16.819383 containerd[1458]: time="2026-01-24T00:53:16.819328653Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:53:16.819383 containerd[1458]: time="2026-01-24T00:53:16.819377785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819475 containerd[1458]: time="2026-01-24T00:53:16.819443407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819505 containerd[1458]: time="2026-01-24T00:53:16.819476809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819745 containerd[1458]: time="2026-01-24T00:53:16.819659711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819745 containerd[1458]: time="2026-01-24T00:53:16.819697761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819745 containerd[1458]: time="2026-01-24T00:53:16.819742365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819813 containerd[1458]: time="2026-01-24T00:53:16.819752774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.819872 containerd[1458]: time="2026-01-24T00:53:16.819841170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.820187 containerd[1458]: time="2026-01-24T00:53:16.820167689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:53:16.820386 containerd[1458]: time="2026-01-24T00:53:16.820339239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:53:16.820386 containerd[1458]: time="2026-01-24T00:53:16.820380876Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:53:16.820734 containerd[1458]: time="2026-01-24T00:53:16.820480433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:53:16.820734 containerd[1458]: time="2026-01-24T00:53:16.820536597Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:53:16.828547 containerd[1458]: time="2026-01-24T00:53:16.828505091Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:53:16.828547 containerd[1458]: time="2026-01-24T00:53:16.828541289Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
type=io.containerd.differ.v1 Jan 24 00:53:16.828671 containerd[1458]: time="2026-01-24T00:53:16.828556136Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:53:16.828671 containerd[1458]: time="2026-01-24T00:53:16.828570282Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:53:16.828671 containerd[1458]: time="2026-01-24T00:53:16.828583167Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:53:16.828763 containerd[1458]: time="2026-01-24T00:53:16.828750279Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:53:16.829134 containerd[1458]: time="2026-01-24T00:53:16.829117534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:53:16.829244 containerd[1458]: time="2026-01-24T00:53:16.829226177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:53:16.829332 containerd[1458]: time="2026-01-24T00:53:16.829297891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:53:16.829395 containerd[1458]: time="2026-01-24T00:53:16.829337495Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:53:16.829395 containerd[1458]: time="2026-01-24T00:53:16.829352853Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829395 containerd[1458]: time="2026-01-24T00:53:16.829364575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829395 containerd[1458]: time="2026-01-24T00:53:16.829379423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829395 containerd[1458]: time="2026-01-24T00:53:16.829391586Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829402967Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829416001Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829426561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829442701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829462989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829474801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829484829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829495099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829510 containerd[1458]: time="2026-01-24T00:53:16.829506901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829518262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829533531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829545042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829555652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829568396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829579356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829590417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829600897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829614161Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829632385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829643977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.829675 containerd[1458]: time="2026-01-24T00:53:16.829654015Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829740477Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829844420Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829855361Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829866412Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829879997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829890617Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:53:16.829901 containerd[1458]: time="2026-01-24T00:53:16.829899033Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:53:16.830065 containerd[1458]: time="2026-01-24T00:53:16.829907769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:53:16.830986 containerd[1458]: time="2026-01-24T00:53:16.830175228Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:53:16.830986 containerd[1458]: time="2026-01-24T00:53:16.830226314Z" level=info msg="Connect containerd service" Jan 24 00:53:16.830986 containerd[1458]: time="2026-01-24T00:53:16.830254185Z" level=info msg="using legacy CRI server" Jan 24 00:53:16.830986 containerd[1458]: time="2026-01-24T00:53:16.830260327Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:53:16.830986 containerd[1458]: 
time="2026-01-24T00:53:16.830340637Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:53:16.832906 containerd[1458]: time="2026-01-24T00:53:16.832878534Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:53:16.833316 containerd[1458]: time="2026-01-24T00:53:16.833287628Z" level=info msg="Start subscribing containerd event" Jan 24 00:53:16.833379 containerd[1458]: time="2026-01-24T00:53:16.833366766Z" level=info msg="Start recovering state" Jan 24 00:53:16.833464 containerd[1458]: time="2026-01-24T00:53:16.833451784Z" level=info msg="Start event monitor" Jan 24 00:53:16.833512 containerd[1458]: time="2026-01-24T00:53:16.833501858Z" level=info msg="Start snapshots syncer" Jan 24 00:53:16.833591 containerd[1458]: time="2026-01-24T00:53:16.833579021Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:53:16.833914 containerd[1458]: time="2026-01-24T00:53:16.833898278Z" level=info msg="Start streaming server" Jan 24 00:53:16.834143 containerd[1458]: time="2026-01-24T00:53:16.833874924Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:53:16.834187 containerd[1458]: time="2026-01-24T00:53:16.834177649Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:53:16.834342 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:53:16.834578 containerd[1458]: time="2026-01-24T00:53:16.834548190Z" level=info msg="containerd successfully booted in 0.038872s" Jan 24 00:53:17.130104 tar[1449]: linux-amd64/README.md Jan 24 00:53:17.142026 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:53:17.202545 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:53:17.228628 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:53:17.250286 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:53:17.259211 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:53:17.259488 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:53:17.264687 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:53:17.281834 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:53:17.287517 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:53:17.291924 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:53:17.295489 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:53:18.065235 systemd-networkd[1387]: eth0: Gained IPv6LL Jan 24 00:53:18.068135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:53:18.074026 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:53:18.091433 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:53:18.097215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:18.103685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:53:18.127694 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:53:18.128047 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 24 00:53:18.132020 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:53:18.137169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:53:18.898081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:18.902422 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:53:18.903413 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:18.906240 systemd[1]: Startup finished in 1.188s (kernel) + 6.013s (initrd) + 5.348s (userspace) = 12.550s. Jan 24 00:53:19.320596 kubelet[1538]: E0124 00:53:19.320480 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:19.324700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:19.325055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:53:20.961511 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:53:20.963012 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:57516.service - OpenSSH per-connection server daemon (10.0.0.1:57516). Jan 24 00:53:21.017608 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 57516 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:21.020286 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:21.031053 systemd-logind[1439]: New session 1 of user core. Jan 24 00:53:21.032485 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:53:21.041272 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:53:21.054545 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:53:21.067331 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:53:21.070641 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:53:21.170999 systemd[1555]: Queued start job for default target default.target. Jan 24 00:53:21.182246 systemd[1555]: Created slice app.slice - User Application Slice. Jan 24 00:53:21.182300 systemd[1555]: Reached target paths.target - Paths. Jan 24 00:53:21.182316 systemd[1555]: Reached target timers.target - Timers. Jan 24 00:53:21.184145 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:53:21.198146 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:53:21.198293 systemd[1555]: Reached target sockets.target - Sockets. Jan 24 00:53:21.198329 systemd[1555]: Reached target basic.target - Basic System. Jan 24 00:53:21.198367 systemd[1555]: Reached target default.target - Main User Target. Jan 24 00:53:21.198406 systemd[1555]: Startup finished in 120ms. Jan 24 00:53:21.198578 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:53:21.200292 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:53:21.264109 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:57524.service - OpenSSH per-connection server daemon (10.0.0.1:57524). 
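The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet been initialized or joined with kubeadm: that file is written by kubeadm, and until it exists the unit exits with status 1 and systemd schedules restarts (the restart counter shows up further down). A sketch of the same precondition check, illustrative only:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_present() -> bool:
        # kubeadm init/join writes this file; without it the kubelet
        # exits exactly as in the log entry above.
        if not KUBELET_CONFIG.is_file():
            print(f"open {KUBELET_CONFIG}: no such file or directory")
            return False
        return True

    kubelet_config_present()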
Jan 24 00:53:21.305625 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 57524 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:21.307815 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:21.313372 systemd-logind[1439]: New session 2 of user core. Jan 24 00:53:21.323157 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:53:21.381510 sshd[1566]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:21.394523 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:57524.service: Deactivated successfully. Jan 24 00:53:21.397108 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:53:21.399331 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:53:21.406442 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:57532.service - OpenSSH per-connection server daemon (10.0.0.1:57532). Jan 24 00:53:21.407802 systemd-logind[1439]: Removed session 2. Jan 24 00:53:21.440490 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 57532 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:21.442236 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:21.448657 systemd-logind[1439]: New session 3 of user core. Jan 24 00:53:21.466355 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:53:21.522396 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:21.530876 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:57532.service: Deactivated successfully. Jan 24 00:53:21.533477 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:53:21.535684 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:53:21.548529 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:57548.service - OpenSSH per-connection server daemon (10.0.0.1:57548). Jan 24 00:53:21.550118 systemd-logind[1439]: Removed session 3. Jan 24 00:53:21.587269 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 57548 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:21.589230 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:21.596197 systemd-logind[1439]: New session 4 of user core. Jan 24 00:53:21.603178 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:53:21.665329 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:21.684403 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:57548.service: Deactivated successfully. Jan 24 00:53:21.687234 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:53:21.689707 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:53:21.702446 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:57556.service - OpenSSH per-connection server daemon (10.0.0.1:57556). Jan 24 00:53:21.703649 systemd-logind[1439]: Removed session 4. Jan 24 00:53:21.739588 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 57556 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:21.741439 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:21.748570 systemd-logind[1439]: New session 5 of user core. Jan 24 00:53:21.765484 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 24 00:53:21.834129 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:53:21.834687 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:53:21.856118 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 24 00:53:21.858878 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:21.878892 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:57556.service: Deactivated successfully. Jan 24 00:53:21.880908 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:53:21.883011 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:53:21.890496 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:57566.service - OpenSSH per-connection server daemon (10.0.0.1:57566). Jan 24 00:53:21.892158 systemd-logind[1439]: Removed session 5. Jan 24 00:53:21.930585 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 57566 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:21.932628 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:21.938246 systemd-logind[1439]: New session 6 of user core. Jan 24 00:53:21.952211 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:53:22.013167 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:53:22.013716 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:53:22.019397 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 24 00:53:22.026843 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:53:22.027400 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:53:22.051416 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:53:22.054656 auditctl[1602]: No rules Jan 24 00:53:22.056321 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:53:22.056734 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:53:22.059595 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:53:22.107121 augenrules[1620]: No rules Jan 24 00:53:22.108175 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:53:22.109537 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 24 00:53:22.111510 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:22.123403 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:57566.service: Deactivated successfully. Jan 24 00:53:22.125187 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:53:22.126638 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:53:22.134303 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:57578.service - OpenSSH per-connection server daemon (10.0.0.1:57578). Jan 24 00:53:22.135344 systemd-logind[1439]: Removed session 6. Jan 24 00:53:22.168573 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 57578 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:22.170583 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:22.176702 systemd-logind[1439]: New session 7 of user core. Jan 24 00:53:22.186232 systemd[1]: Started session-7.scope - Session 7 of User core. 
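The audit sequence above deletes the two shipped rule files, restarts audit-rules, and augenrules then reports "No rules". augenrules assembles /etc/audit/audit.rules by merging every *.rules file under /etc/audit/rules.d in lexical order, so removing both files yields an empty rule set. A sketch of that merge (illustrative, not the real tool):

    import glob

    def merged_audit_rules(rules_dir="/etc/audit/rules.d"):
        # Concatenate *.rules files in sorted order, roughly the way
        # augenrules builds /etc/audit/audit.rules; with the two files
        # removed above, this comes back empty: "No rules".
        lines = []
        for path in sorted(glob.glob(f"{rules_dir}/*.rules")):
            with open(path) as f:
                lines += [ln.strip() for ln in f
                          if ln.strip() and not ln.lstrip().startswith("#")]
        return lines

    print(merged_audit_rules() or "No rules")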
Jan 24 00:53:22.243194 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:53:22.243588 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:53:22.545224 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:53:22.545391 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:53:22.820484 dockerd[1650]: time="2026-01-24T00:53:22.820297251Z" level=info msg="Starting up" Jan 24 00:53:23.038856 dockerd[1650]: time="2026-01-24T00:53:23.038711535Z" level=info msg="Loading containers: start." Jan 24 00:53:23.198997 kernel: Initializing XFRM netlink socket Jan 24 00:53:23.326513 systemd-networkd[1387]: docker0: Link UP Jan 24 00:53:23.360820 dockerd[1650]: time="2026-01-24T00:53:23.360678145Z" level=info msg="Loading containers: done." Jan 24 00:53:23.379530 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck649471343-merged.mount: Deactivated successfully. Jan 24 00:53:23.382277 dockerd[1650]: time="2026-01-24T00:53:23.382203430Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:53:23.382387 dockerd[1650]: time="2026-01-24T00:53:23.382335887Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:53:23.382502 dockerd[1650]: time="2026-01-24T00:53:23.382449639Z" level=info msg="Daemon has completed initialization" Jan 24 00:53:23.431462 dockerd[1650]: time="2026-01-24T00:53:23.431346633Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:53:23.431600 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:53:24.182581 containerd[1458]: time="2026-01-24T00:53:24.182474729Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:53:24.709715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231892153.mount: Deactivated successfully. 
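The overlay2 warning during docker startup above points at a kernel build option: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, copied-up directories may use redirect xattrs that the native diff path cannot follow, so the daemon falls back to a slower diff when building images. On kernels built with CONFIG_IKCONFIG_PROC the option can be confirmed from /proc/config.gz; a sketch (the file may not exist on every kernel):

    import gzip

    def kernel_config_value(option, path="/proc/config.gz"):
        # Returns "y"/"m"/"n" for a kernel config symbol, or None if
        # the symbol (or /proc/config.gz itself) is absent.
        try:
            with gzip.open(path, "rt") as cfg:
                for line in cfg:
                    if line.startswith(option + "="):
                        return line.strip().split("=", 1)[1]
                    if line.strip() == f"# {option} is not set":
                        return "n"
        except FileNotFoundError:
            return None
        return None

    print(kernel_config_value("CONFIG_OVERLAY_FS_REDIRECT_DIR"))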
Jan 24 00:53:25.673360 containerd[1458]: time="2026-01-24T00:53:25.673299951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:25.674391 containerd[1458]: time="2026-01-24T00:53:25.674351093Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 24 00:53:25.675655 containerd[1458]: time="2026-01-24T00:53:25.675561843Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:25.678768 containerd[1458]: time="2026-01-24T00:53:25.678677588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:25.680246 containerd[1458]: time="2026-01-24T00:53:25.680166295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.497620915s" Jan 24 00:53:25.680246 containerd[1458]: time="2026-01-24T00:53:25.680233611Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:53:25.681067 containerd[1458]: time="2026-01-24T00:53:25.681030521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:53:26.608243 containerd[1458]: time="2026-01-24T00:53:26.608141900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:26.608737 containerd[1458]: time="2026-01-24T00:53:26.608695449Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 24 00:53:26.610026 containerd[1458]: time="2026-01-24T00:53:26.609889567Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:26.613233 containerd[1458]: time="2026-01-24T00:53:26.613160323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:26.614496 containerd[1458]: time="2026-01-24T00:53:26.614454894Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 933.378258ms" Jan 24 00:53:26.614548 containerd[1458]: time="2026-01-24T00:53:26.614501181Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:53:26.615191 
containerd[1458]: time="2026-01-24T00:53:26.615128807Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:53:27.539322 containerd[1458]: time="2026-01-24T00:53:27.539231513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:27.540361 containerd[1458]: time="2026-01-24T00:53:27.540290919Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 24 00:53:27.541677 containerd[1458]: time="2026-01-24T00:53:27.541615953Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:27.545005 containerd[1458]: time="2026-01-24T00:53:27.544869165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:27.546072 containerd[1458]: time="2026-01-24T00:53:27.545996837Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 930.810062ms" Jan 24 00:53:27.546072 containerd[1458]: time="2026-01-24T00:53:27.546067759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:53:27.546903 containerd[1458]: time="2026-01-24T00:53:27.546687472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:53:28.519364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230411912.mount: Deactivated successfully. 
Jan 24 00:53:28.805475 containerd[1458]: time="2026-01-24T00:53:28.805245105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:28.806763 containerd[1458]: time="2026-01-24T00:53:28.806710355Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 24 00:53:28.808076 containerd[1458]: time="2026-01-24T00:53:28.808013017Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:28.811170 containerd[1458]: time="2026-01-24T00:53:28.811081594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:28.811763 containerd[1458]: time="2026-01-24T00:53:28.811695545Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.264975724s" Jan 24 00:53:28.811809 containerd[1458]: time="2026-01-24T00:53:28.811762029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:53:28.812541 containerd[1458]: time="2026-01-24T00:53:28.812506008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:53:29.276445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553491866.mount: Deactivated successfully. Jan 24 00:53:29.528483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:53:29.539126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:29.741249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:29.746603 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:29.796395 kubelet[1905]: E0124 00:53:29.796210 1905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:29.801497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:29.801794 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:53:30.312753 containerd[1458]: time="2026-01-24T00:53:30.312628322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:30.314009 containerd[1458]: time="2026-01-24T00:53:30.313696468Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 24 00:53:30.315128 containerd[1458]: time="2026-01-24T00:53:30.315023663Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:30.318589 containerd[1458]: time="2026-01-24T00:53:30.318471318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:30.321493 containerd[1458]: time="2026-01-24T00:53:30.321369968Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.508820489s" Jan 24 00:53:30.321493 containerd[1458]: time="2026-01-24T00:53:30.321424219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:53:30.322357 containerd[1458]: time="2026-01-24T00:53:30.322309459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:53:30.740997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642730437.mount: Deactivated successfully. 
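The transient mount units above ("var-lib-containerd-tmpmounts-containerd\x2dmount642730437.mount" and friends) use systemd's unit-name escaping: "/" in the path becomes "-", and a literal "-" becomes "\x2d". systemd-escape --unescape --path does the reverse; a small decoder sketch to the same effect (illustrative, and it ignores the less common escapes):

    import re

    def unescape_mount_unit(name: str) -> str:
        # Strip the .mount suffix, split on "-" (escaped path separators),
        # then decode \xNN sequences so "\x2d" becomes a literal "-".
        stem = name.removesuffix(".mount")
        parts = [re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), p)
                 for p in stem.split("-")]
        return "/" + "/".join(parts)

    print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount642730437.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount642730437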
Jan 24 00:53:30.761171 containerd[1458]: time="2026-01-24T00:53:30.761039770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:30.762235 containerd[1458]: time="2026-01-24T00:53:30.762187733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 24 00:53:30.763995 containerd[1458]: time="2026-01-24T00:53:30.763886802Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:30.767577 containerd[1458]: time="2026-01-24T00:53:30.767467996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:30.768538 containerd[1458]: time="2026-01-24T00:53:30.768437559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 446.070002ms" Jan 24 00:53:30.768538 containerd[1458]: time="2026-01-24T00:53:30.768484958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:53:30.769215 containerd[1458]: time="2026-01-24T00:53:30.769165738Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:53:31.203414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257266571.mount: Deactivated successfully. Jan 24 00:53:33.398838 containerd[1458]: time="2026-01-24T00:53:33.398655380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:33.399923 containerd[1458]: time="2026-01-24T00:53:33.399751829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 24 00:53:33.401138 containerd[1458]: time="2026-01-24T00:53:33.401077112Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:33.405326 containerd[1458]: time="2026-01-24T00:53:33.405285503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:33.406792 containerd[1458]: time="2026-01-24T00:53:33.406711084Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.637494079s" Jan 24 00:53:33.406792 containerd[1458]: time="2026-01-24T00:53:33.406762780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:53:36.507023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:53:36.520241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:36.546717 systemd[1]: Reloading requested from client PID 2031 ('systemctl') (unit session-7.scope)... Jan 24 00:53:36.546754 systemd[1]: Reloading... Jan 24 00:53:36.629001 zram_generator::config[2070]: No configuration found. Jan 24 00:53:36.771663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:53:36.872306 systemd[1]: Reloading finished in 325 ms. Jan 24 00:53:36.928241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:36.932787 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:53:36.933271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:36.948346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:37.095152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:37.101861 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:53:37.150510 kubelet[2120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:53:37.150510 kubelet[2120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:53:37.150841 kubelet[2120]: I0124 00:53:37.150527 2120 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:53:38.177659 kubelet[2120]: I0124 00:53:38.177551 2120 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:53:38.177659 kubelet[2120]: I0124 00:53:38.177598 2120 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:53:38.179642 kubelet[2120]: I0124 00:53:38.179591 2120 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:53:38.179642 kubelet[2120]: I0124 00:53:38.179632 2120 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:53:38.179870 kubelet[2120]: I0124 00:53:38.179817 2120 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:53:38.297325 kubelet[2120]: I0124 00:53:38.297214 2120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:53:38.300325 kubelet[2120]: E0124 00:53:38.300074 2120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:53:38.302564 kubelet[2120]: E0124 00:53:38.302519 2120 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:53:38.302617 kubelet[2120]: I0124 00:53:38.302581 2120 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:53:38.308618 kubelet[2120]: I0124 00:53:38.308566 2120 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 24 00:53:38.309548 kubelet[2120]: I0124 00:53:38.309432 2120 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:53:38.309717 kubelet[2120]: I0124 00:53:38.309486 2120 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:53:38.309717 kubelet[2120]: I0124 00:53:38.309694 2120 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:53:38.309717 kubelet[2120]: I0124 00:53:38.309704 2120 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:53:38.310038 kubelet[2120]: I0124 00:53:38.309794 2120 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Jan 24 00:53:38.312861 kubelet[2120]: I0124 00:53:38.312725 2120 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:38.314846 kubelet[2120]: I0124 00:53:38.314737 2120 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:53:38.314846 kubelet[2120]: I0124 00:53:38.314782 2120 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:53:38.314846 kubelet[2120]: I0124 00:53:38.314811 2120 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:53:38.314846 kubelet[2120]: I0124 00:53:38.314826 2120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:53:38.315898 kubelet[2120]: E0124 00:53:38.315843 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:53:38.315898 kubelet[2120]: E0124 00:53:38.315856 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:53:38.317447 kubelet[2120]: I0124 00:53:38.317237 2120 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:53:38.318018 kubelet[2120]: I0124 00:53:38.317887 2120 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:53:38.318018 kubelet[2120]: I0124 00:53:38.318002 2120 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:53:38.318085 kubelet[2120]: W0124 00:53:38.318051 2120 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 24 00:53:38.321792 kubelet[2120]: I0124 00:53:38.321706 2120 server.go:1262] "Started kubelet" Jan 24 00:53:38.322695 kubelet[2120]: I0124 00:53:38.322060 2120 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:53:38.322695 kubelet[2120]: I0124 00:53:38.322119 2120 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:53:38.322695 kubelet[2120]: I0124 00:53:38.322461 2120 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:53:38.322695 kubelet[2120]: I0124 00:53:38.322576 2120 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:53:38.322695 kubelet[2120]: I0124 00:53:38.322605 2120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:53:38.325035 kubelet[2120]: I0124 00:53:38.324839 2120 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:53:38.326692 kubelet[2120]: I0124 00:53:38.325850 2120 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:53:38.326692 kubelet[2120]: E0124 00:53:38.326337 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:38.326692 kubelet[2120]: I0124 00:53:38.326366 2120 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:53:38.326692 kubelet[2120]: I0124 00:53:38.326549 2120 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:53:38.326692 kubelet[2120]: I0124 00:53:38.326586 2120 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:53:38.327029 kubelet[2120]: E0124 00:53:38.326834 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:53:38.327831 kubelet[2120]: E0124 00:53:38.327149 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Jan 24 00:53:38.330340 kubelet[2120]: E0124 00:53:38.326553 2120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84984efe1749 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:53:38.321643337 +0000 UTC m=+1.214575453,LastTimestamp:2026-01-24 00:53:38.321643337 +0000 UTC m=+1.214575453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:53:38.330746 kubelet[2120]: I0124 00:53:38.330650 2120 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:53:38.330844 kubelet[2120]: I0124 00:53:38.330806 2120 factory.go:221] Registration of 
the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:53:38.334403 kubelet[2120]: E0124 00:53:38.334353 2120 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:53:38.334403 kubelet[2120]: I0124 00:53:38.334382 2120 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:53:38.351709 kubelet[2120]: I0124 00:53:38.351666 2120 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:53:38.351709 kubelet[2120]: I0124 00:53:38.351707 2120 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:53:38.351839 kubelet[2120]: I0124 00:53:38.351723 2120 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:38.358333 kubelet[2120]: I0124 00:53:38.356822 2120 policy_none.go:49] "None policy: Start" Jan 24 00:53:38.364424 kubelet[2120]: I0124 00:53:38.358689 2120 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:53:38.364424 kubelet[2120]: I0124 00:53:38.360047 2120 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:53:38.364424 kubelet[2120]: I0124 00:53:38.358760 2120 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:53:38.364424 kubelet[2120]: I0124 00:53:38.362573 2120 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 00:53:38.364424 kubelet[2120]: I0124 00:53:38.362592 2120 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:53:38.364424 kubelet[2120]: I0124 00:53:38.362614 2120 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:53:38.364424 kubelet[2120]: E0124 00:53:38.362658 2120 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:53:38.364424 kubelet[2120]: E0124 00:53:38.363269 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:53:38.367417 kubelet[2120]: I0124 00:53:38.366478 2120 policy_none.go:47] "Start" Jan 24 00:53:38.372322 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:53:38.384686 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:53:38.389224 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
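Every "dial tcp 10.0.0.102:6443: connect: connection refused" above is the expected bootstrap ordering on a control-plane node: this kubelet is itself responsible for starting kube-apiserver as a static pod from /etc/kubernetes/manifests, so nothing listens on 6443 yet and every watch, lease, and event call fails until that pod is up. A sketch of the same reachability probe (address and port taken from the log):

    import socket

    def apiserver_reachable(host="10.0.0.102", port=6443, timeout=1.0) -> bool:
        # True once something accepts TCP on the API server port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:  # ConnectionRefusedError while bootstrapping
            print(f"dial tcp {host}:{port}: {exc}")
            return False

    apiserver_reachable()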
Jan 24 00:53:38.402689 kubelet[2120]: E0124 00:53:38.402535 2120 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:53:38.402989 kubelet[2120]: I0124 00:53:38.402750 2120 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:53:38.402989 kubelet[2120]: I0124 00:53:38.402762 2120 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:53:38.403145 kubelet[2120]: I0124 00:53:38.403084 2120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:53:38.404506 kubelet[2120]: E0124 00:53:38.404439 2120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:53:38.404506 kubelet[2120]: E0124 00:53:38.404500 2120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:53:38.477626 systemd[1]: Created slice kubepods-burstable-poda3b857b4d6b6a9964e311dc400e41647.slice - libcontainer container kubepods-burstable-poda3b857b4d6b6a9964e311dc400e41647.slice. Jan 24 00:53:38.496384 kubelet[2120]: E0124 00:53:38.496238 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:38.500653 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 24 00:53:38.504389 kubelet[2120]: I0124 00:53:38.504333 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:38.504814 kubelet[2120]: E0124 00:53:38.504743 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jan 24 00:53:38.510396 kubelet[2120]: E0124 00:53:38.510327 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:38.513662 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
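The slice names above are derived mechanically from pod QoS class and UID: with the systemd cgroup driver (CgroupDriver "systemd" in the dump above), a burstable pod with UID a3b857b4d6b6a9964e311dc400e41647 lands in kubepods-burstable-poda3b857b4d6b6a9964e311dc400e41647.slice, nested under kubepods-burstable.slice and kubepods.slice. A sketch of the mapping (illustrative; kubelet also maps "-" in a UID to "_", though these static-pod UIDs contain none):

    def pod_slice_name(uid: str, qos: str = "burstable") -> str:
        # systemd cgroup driver naming as seen in the "Created slice"
        # entries above; "-" in the UID would be rewritten to "_".
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice_name("a3b857b4d6b6a9964e311dc400e41647"))
    # -> kubepods-burstable-poda3b857b4d6b6a9964e311dc400e41647.slice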
Jan 24 00:53:38.516028 kubelet[2120]: E0124 00:53:38.515857 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:38.527786 kubelet[2120]: E0124 00:53:38.527751 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Jan 24 00:53:38.627580 kubelet[2120]: I0124 00:53:38.627504 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:38.627580 kubelet[2120]: I0124 00:53:38.627579 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3b857b4d6b6a9964e311dc400e41647-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b857b4d6b6a9964e311dc400e41647\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:38.627722 kubelet[2120]: I0124 00:53:38.627606 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3b857b4d6b6a9964e311dc400e41647-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b857b4d6b6a9964e311dc400e41647\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:38.627722 kubelet[2120]: I0124 00:53:38.627664 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3b857b4d6b6a9964e311dc400e41647-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a3b857b4d6b6a9964e311dc400e41647\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:38.627722 kubelet[2120]: I0124 00:53:38.627695 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:38.627782 kubelet[2120]: I0124 00:53:38.627731 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:38.627782 kubelet[2120]: I0124 00:53:38.627752 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:38.627894 kubelet[2120]: I0124 00:53:38.627840 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:38.627894 kubelet[2120]: I0124 00:53:38.627884 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:38.707373 kubelet[2120]: I0124 00:53:38.707100 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:38.707486 kubelet[2120]: E0124 00:53:38.707436 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jan 24 00:53:38.799996 kubelet[2120]: E0124 00:53:38.799827 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:38.800852 containerd[1458]: time="2026-01-24T00:53:38.800786632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a3b857b4d6b6a9964e311dc400e41647,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:38.813797 kubelet[2120]: E0124 00:53:38.813753 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:38.814412 containerd[1458]: time="2026-01-24T00:53:38.814321077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:38.818740 kubelet[2120]: E0124 00:53:38.818658 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:38.819586 containerd[1458]: time="2026-01-24T00:53:38.819357217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:38.928678 kubelet[2120]: E0124 00:53:38.928507 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Jan 24 00:53:39.109704 kubelet[2120]: I0124 00:53:39.109490 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:39.110163 kubelet[2120]: E0124 00:53:39.109993 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Jan 24 00:53:39.211857 kubelet[2120]: E0124 00:53:39.211812 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:53:39.231779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881289752.mount: Deactivated successfully. Jan 24 00:53:39.239825 containerd[1458]: time="2026-01-24T00:53:39.239689703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:39.240993 containerd[1458]: time="2026-01-24T00:53:39.240880255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:53:39.246138 containerd[1458]: time="2026-01-24T00:53:39.246017399Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:39.248324 containerd[1458]: time="2026-01-24T00:53:39.248255482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:39.249350 containerd[1458]: time="2026-01-24T00:53:39.249184786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:53:39.249350 containerd[1458]: time="2026-01-24T00:53:39.249324407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:39.250302 containerd[1458]: time="2026-01-24T00:53:39.250175054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:53:39.251420 containerd[1458]: time="2026-01-24T00:53:39.251310063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:53:39.255439 containerd[1458]: time="2026-01-24T00:53:39.255387814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.497608ms" Jan 24 00:53:39.260207 containerd[1458]: time="2026-01-24T00:53:39.260145264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.751262ms" Jan 24 00:53:39.260862 containerd[1458]: time="2026-01-24T00:53:39.260717618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 441.311429ms" Jan 24 00:53:39.297071 kubelet[2120]: E0124 00:53:39.297019 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:53:39.384356 containerd[1458]: time="2026-01-24T00:53:39.383751750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:39.384356 containerd[1458]: time="2026-01-24T00:53:39.383802234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:39.384356 containerd[1458]: time="2026-01-24T00:53:39.383816450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:39.384356 containerd[1458]: time="2026-01-24T00:53:39.383894185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:39.384507 containerd[1458]: time="2026-01-24T00:53:39.384432109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:39.385432 containerd[1458]: time="2026-01-24T00:53:39.384872682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:39.385432 containerd[1458]: time="2026-01-24T00:53:39.384887870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:39.385432 containerd[1458]: time="2026-01-24T00:53:39.385054491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:39.390415 containerd[1458]: time="2026-01-24T00:53:39.390073296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:39.390415 containerd[1458]: time="2026-01-24T00:53:39.390129430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:39.396240 containerd[1458]: time="2026-01-24T00:53:39.394795190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:39.396240 containerd[1458]: time="2026-01-24T00:53:39.395044404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:39.427133 systemd[1]: Started cri-containerd-1fd09c8d48c3f341a704ef192355a2016c92f143388885cf0fc9f1f02c81baa4.scope - libcontainer container 1fd09c8d48c3f341a704ef192355a2016c92f143388885cf0fc9f1f02c81baa4. Jan 24 00:53:39.429131 systemd[1]: Started cri-containerd-7b29dbd04d9917a573fba688f9f913fdcd6f02e3b19976b0c7667e37c5bf65b4.scope - libcontainer container 7b29dbd04d9917a573fba688f9f913fdcd6f02e3b19976b0c7667e37c5bf65b4. Jan 24 00:53:39.431486 systemd[1]: Started cri-containerd-b994baee694ed65aa43878dea814f4bfc54bb5db9667c969d97dd8b3251e87dd.scope - libcontainer container b994baee694ed65aa43878dea814f4bfc54bb5db9667c969d97dd8b3251e87dd. 
Jan 24 00:53:39.484881 containerd[1458]: time="2026-01-24T00:53:39.484671675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"b994baee694ed65aa43878dea814f4bfc54bb5db9667c969d97dd8b3251e87dd\"" Jan 24 00:53:39.491321 kubelet[2120]: E0124 00:53:39.491291 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:39.496538 containerd[1458]: time="2026-01-24T00:53:39.496458094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd09c8d48c3f341a704ef192355a2016c92f143388885cf0fc9f1f02c81baa4\"" Jan 24 00:53:39.499385 kubelet[2120]: E0124 00:53:39.499027 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:39.502673 containerd[1458]: time="2026-01-24T00:53:39.502601281Z" level=info msg="CreateContainer within sandbox \"b994baee694ed65aa43878dea814f4bfc54bb5db9667c969d97dd8b3251e87dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:53:39.508265 containerd[1458]: time="2026-01-24T00:53:39.508236952Z" level=info msg="CreateContainer within sandbox \"1fd09c8d48c3f341a704ef192355a2016c92f143388885cf0fc9f1f02c81baa4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:53:39.524143 containerd[1458]: time="2026-01-24T00:53:39.524055426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a3b857b4d6b6a9964e311dc400e41647,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b29dbd04d9917a573fba688f9f913fdcd6f02e3b19976b0c7667e37c5bf65b4\"" Jan 24 00:53:39.524858 kubelet[2120]: E0124 00:53:39.524824 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:39.529599 containerd[1458]: time="2026-01-24T00:53:39.529554261Z" level=info msg="CreateContainer within sandbox \"b994baee694ed65aa43878dea814f4bfc54bb5db9667c969d97dd8b3251e87dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a9c5a5eadd1f1c216c9b636e32b41419c34d9bc9674ca6c058f5a07f4ce20ef\"" Jan 24 00:53:39.531071 containerd[1458]: time="2026-01-24T00:53:39.530377173Z" level=info msg="StartContainer for \"0a9c5a5eadd1f1c216c9b636e32b41419c34d9bc9674ca6c058f5a07f4ce20ef\"" Jan 24 00:53:39.531245 containerd[1458]: time="2026-01-24T00:53:39.531181609Z" level=info msg="CreateContainer within sandbox \"7b29dbd04d9917a573fba688f9f913fdcd6f02e3b19976b0c7667e37c5bf65b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:53:39.540222 containerd[1458]: time="2026-01-24T00:53:39.540128303Z" level=info msg="CreateContainer within sandbox \"1fd09c8d48c3f341a704ef192355a2016c92f143388885cf0fc9f1f02c81baa4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a3c2ae178b4deb1411a9f767001c394d49722dd5dab6d95dc8fba9438275508\"" Jan 24 00:53:39.540830 containerd[1458]: time="2026-01-24T00:53:39.540743346Z" level=info msg="StartContainer for \"7a3c2ae178b4deb1411a9f767001c394d49722dd5dab6d95dc8fba9438275508\"" Jan 24 
00:53:39.560667 containerd[1458]: time="2026-01-24T00:53:39.560524959Z" level=info msg="CreateContainer within sandbox \"7b29dbd04d9917a573fba688f9f913fdcd6f02e3b19976b0c7667e37c5bf65b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"692ba66c42df835dbd0f092e5fa0d3fdb32177183e8ebdda80643d3cdd85d485\"" Jan 24 00:53:39.561696 containerd[1458]: time="2026-01-24T00:53:39.561643923Z" level=info msg="StartContainer for \"692ba66c42df835dbd0f092e5fa0d3fdb32177183e8ebdda80643d3cdd85d485\"" Jan 24 00:53:39.583059 systemd[1]: Started cri-containerd-0a9c5a5eadd1f1c216c9b636e32b41419c34d9bc9674ca6c058f5a07f4ce20ef.scope - libcontainer container 0a9c5a5eadd1f1c216c9b636e32b41419c34d9bc9674ca6c058f5a07f4ce20ef. Jan 24 00:53:39.588855 systemd[1]: Started cri-containerd-7a3c2ae178b4deb1411a9f767001c394d49722dd5dab6d95dc8fba9438275508.scope - libcontainer container 7a3c2ae178b4deb1411a9f767001c394d49722dd5dab6d95dc8fba9438275508. Jan 24 00:53:39.624194 systemd[1]: Started cri-containerd-692ba66c42df835dbd0f092e5fa0d3fdb32177183e8ebdda80643d3cdd85d485.scope - libcontainer container 692ba66c42df835dbd0f092e5fa0d3fdb32177183e8ebdda80643d3cdd85d485. Jan 24 00:53:39.674620 containerd[1458]: time="2026-01-24T00:53:39.672556106Z" level=info msg="StartContainer for \"7a3c2ae178b4deb1411a9f767001c394d49722dd5dab6d95dc8fba9438275508\" returns successfully" Jan 24 00:53:39.674620 containerd[1458]: time="2026-01-24T00:53:39.672691800Z" level=info msg="StartContainer for \"0a9c5a5eadd1f1c216c9b636e32b41419c34d9bc9674ca6c058f5a07f4ce20ef\" returns successfully" Jan 24 00:53:39.684322 kubelet[2120]: E0124 00:53:39.684196 2120 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:53:39.701195 containerd[1458]: time="2026-01-24T00:53:39.701108320Z" level=info msg="StartContainer for \"692ba66c42df835dbd0f092e5fa0d3fdb32177183e8ebdda80643d3cdd85d485\" returns successfully" Jan 24 00:53:39.729775 kubelet[2120]: E0124 00:53:39.729643 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s" Jan 24 00:53:39.922531 kubelet[2120]: I0124 00:53:39.922428 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:40.378031 kubelet[2120]: E0124 00:53:40.377834 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:40.378427 kubelet[2120]: E0124 00:53:40.378110 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:40.387983 kubelet[2120]: E0124 00:53:40.386223 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:40.387983 kubelet[2120]: E0124 00:53:40.386334 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 24 00:53:40.390325 kubelet[2120]: E0124 00:53:40.390276 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:40.390504 kubelet[2120]: E0124 00:53:40.390420 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:41.014716 kubelet[2120]: I0124 00:53:41.014573 2120 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:53:41.014716 kubelet[2120]: E0124 00:53:41.014636 2120 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:53:41.037796 kubelet[2120]: E0124 00:53:41.037755 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.138095 kubelet[2120]: E0124 00:53:41.137845 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.239234 kubelet[2120]: E0124 00:53:41.239075 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.339731 kubelet[2120]: E0124 00:53:41.339575 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.391919 kubelet[2120]: E0124 00:53:41.391875 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:41.392560 kubelet[2120]: E0124 00:53:41.392099 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:41.392560 kubelet[2120]: E0124 00:53:41.392280 2120 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:41.392560 kubelet[2120]: E0124 00:53:41.392434 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:41.439871 kubelet[2120]: E0124 00:53:41.439745 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.541047 kubelet[2120]: E0124 00:53:41.540862 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.641678 kubelet[2120]: E0124 00:53:41.641496 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.741756 kubelet[2120]: E0124 00:53:41.741635 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:41.842058 kubelet[2120]: E0124 00:53:41.841878 2120 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:42.027182 kubelet[2120]: I0124 00:53:42.027119 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:42.037900 kubelet[2120]: I0124 00:53:42.037276 2120 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:42.043225 kubelet[2120]: I0124 00:53:42.043190 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:42.318218 kubelet[2120]: I0124 00:53:42.317681 2120 apiserver.go:52] "Watching apiserver" Jan 24 00:53:42.320635 kubelet[2120]: E0124 00:53:42.320585 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:42.327522 kubelet[2120]: I0124 00:53:42.327392 2120 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:53:42.394251 kubelet[2120]: E0124 00:53:42.394083 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:42.395609 kubelet[2120]: I0124 00:53:42.395476 2120 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:42.403799 kubelet[2120]: E0124 00:53:42.403631 2120 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:42.403877 kubelet[2120]: E0124 00:53:42.403853 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:43.395434 kubelet[2120]: E0124 00:53:43.395302 2120 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:43.558138 systemd[1]: Reloading requested from client PID 2413 ('systemctl') (unit session-7.scope)... Jan 24 00:53:43.558178 systemd[1]: Reloading... Jan 24 00:53:43.648099 zram_generator::config[2455]: No configuration found. Jan 24 00:53:43.767753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:53:43.863068 systemd[1]: Reloading finished in 304 ms. Jan 24 00:53:43.946434 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:43.962158 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:53:43.962570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:43.962668 systemd[1]: kubelet.service: Consumed 1.785s CPU time, 129.4M memory peak, 0B memory swap peak. Jan 24 00:53:43.977373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:44.168514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:44.180555 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:53:44.242646 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:53:44.242646 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:53:44.243118 kubelet[2497]: I0124 00:53:44.242640 2497 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:53:44.249321 kubelet[2497]: I0124 00:53:44.249282 2497 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:53:44.249321 kubelet[2497]: I0124 00:53:44.249314 2497 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:53:44.249413 kubelet[2497]: I0124 00:53:44.249339 2497 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:53:44.249413 kubelet[2497]: I0124 00:53:44.249349 2497 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:53:44.250174 kubelet[2497]: I0124 00:53:44.250114 2497 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:53:44.252459 kubelet[2497]: I0124 00:53:44.252413 2497 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:53:44.254611 kubelet[2497]: I0124 00:53:44.254564 2497 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:53:44.260049 kubelet[2497]: E0124 00:53:44.257994 2497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:53:44.260049 kubelet[2497]: I0124 00:53:44.258071 2497 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:53:44.265706 kubelet[2497]: I0124 00:53:44.265571 2497 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:53:44.266150 kubelet[2497]: I0124 00:53:44.266036 2497 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:53:44.266196 kubelet[2497]: I0124 00:53:44.266080 2497 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:53:44.266304 kubelet[2497]: I0124 00:53:44.266203 2497 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:53:44.266304 kubelet[2497]: I0124 00:53:44.266212 2497 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:53:44.266304 kubelet[2497]: I0124 00:53:44.266236 2497 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:53:44.267041 kubelet[2497]: I0124 00:53:44.267011 2497 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:44.267302 kubelet[2497]: I0124 00:53:44.267259 2497 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:53:44.267302 kubelet[2497]: I0124 00:53:44.267294 2497 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:53:44.267370 kubelet[2497]: I0124 00:53:44.267313 2497 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:53:44.267370 kubelet[2497]: I0124 00:53:44.267332 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:53:44.270546 kubelet[2497]: I0124 00:53:44.270468 2497 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:53:44.272622 kubelet[2497]: I0124 00:53:44.272426 2497 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:53:44.273092 kubelet[2497]: I0124 00:53:44.272846 2497 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:53:44.278926 
kubelet[2497]: I0124 00:53:44.278903 2497 server.go:1262] "Started kubelet" Jan 24 00:53:44.280476 kubelet[2497]: I0124 00:53:44.280461 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:53:44.283440 kubelet[2497]: I0124 00:53:44.283341 2497 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:53:44.284748 kubelet[2497]: I0124 00:53:44.284649 2497 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:53:44.284812 kubelet[2497]: I0124 00:53:44.284791 2497 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:53:44.285183 kubelet[2497]: I0124 00:53:44.285118 2497 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:53:44.285420 kubelet[2497]: I0124 00:53:44.285355 2497 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:53:44.285599 kubelet[2497]: E0124 00:53:44.285537 2497 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:44.286683 kubelet[2497]: I0124 00:53:44.286426 2497 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:53:44.291699 kubelet[2497]: I0124 00:53:44.291586 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:53:44.291910 kubelet[2497]: I0124 00:53:44.291868 2497 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:53:44.292116 kubelet[2497]: I0124 00:53:44.292066 2497 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:53:44.292278 kubelet[2497]: I0124 00:53:44.292235 2497 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:53:44.292670 kubelet[2497]: I0124 00:53:44.292550 2497 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:53:44.297388 kubelet[2497]: I0124 00:53:44.296592 2497 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:53:44.297388 kubelet[2497]: E0124 00:53:44.297361 2497 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:53:44.313913 kubelet[2497]: I0124 00:53:44.313786 2497 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:53:44.316066 kubelet[2497]: I0124 00:53:44.316022 2497 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:53:44.316113 kubelet[2497]: I0124 00:53:44.316081 2497 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:53:44.316113 kubelet[2497]: I0124 00:53:44.316104 2497 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:53:44.316272 kubelet[2497]: E0124 00:53:44.316151 2497 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:53:44.349678 kubelet[2497]: I0124 00:53:44.349472 2497 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:53:44.349678 kubelet[2497]: I0124 00:53:44.349498 2497 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:53:44.349678 kubelet[2497]: I0124 00:53:44.349519 2497 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:44.350419 kubelet[2497]: I0124 00:53:44.349825 2497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:53:44.350419 kubelet[2497]: I0124 00:53:44.350116 2497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:53:44.350419 kubelet[2497]: I0124 00:53:44.350248 2497 policy_none.go:49] "None policy: Start" Jan 24 00:53:44.350419 kubelet[2497]: I0124 00:53:44.350258 2497 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:53:44.350419 kubelet[2497]: I0124 00:53:44.350270 2497 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:53:44.352359 kubelet[2497]: I0124 00:53:44.352192 2497 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 00:53:44.352359 kubelet[2497]: I0124 00:53:44.352314 2497 policy_none.go:47] "Start" Jan 24 00:53:44.362018 kubelet[2497]: E0124 00:53:44.361875 2497 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:53:44.362159 kubelet[2497]: I0124 00:53:44.362116 2497 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:53:44.362159 kubelet[2497]: I0124 00:53:44.362128 2497 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:53:44.362464 kubelet[2497]: I0124 00:53:44.362386 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:53:44.365871 kubelet[2497]: E0124 00:53:44.365812 2497 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:53:44.418141 kubelet[2497]: I0124 00:53:44.418103 2497 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.418303 kubelet[2497]: I0124 00:53:44.418239 2497 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:44.420689 kubelet[2497]: I0124 00:53:44.420462 2497 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:44.428118 kubelet[2497]: E0124 00:53:44.427898 2497 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:44.429082 kubelet[2497]: E0124 00:53:44.429049 2497 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.429082 kubelet[2497]: E0124 00:53:44.429060 2497 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:44.468509 kubelet[2497]: I0124 00:53:44.468178 2497 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:44.477627 kubelet[2497]: I0124 00:53:44.477520 2497 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:53:44.477627 kubelet[2497]: I0124 00:53:44.477607 2497 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:53:44.486119 kubelet[2497]: I0124 00:53:44.486067 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.486119 kubelet[2497]: I0124 00:53:44.486133 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.486119 kubelet[2497]: I0124 00:53:44.486167 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3b857b4d6b6a9964e311dc400e41647-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b857b4d6b6a9964e311dc400e41647\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:44.486119 kubelet[2497]: I0124 00:53:44.486190 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.486412 kubelet[2497]: I0124 00:53:44.486213 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:44.486412 kubelet[2497]: I0124 00:53:44.486236 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3b857b4d6b6a9964e311dc400e41647-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b857b4d6b6a9964e311dc400e41647\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:44.486412 kubelet[2497]: I0124 00:53:44.486258 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3b857b4d6b6a9964e311dc400e41647-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a3b857b4d6b6a9964e311dc400e41647\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:53:44.486412 kubelet[2497]: I0124 00:53:44.486286 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.486412 kubelet[2497]: I0124 00:53:44.486329 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:53:44.728711 kubelet[2497]: E0124 00:53:44.728508 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:44.730176 kubelet[2497]: E0124 00:53:44.730065 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:44.730599 kubelet[2497]: E0124 00:53:44.730440 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:45.269283 kubelet[2497]: I0124 00:53:45.269128 2497 apiserver.go:52] "Watching apiserver" Jan 24 00:53:45.331636 kubelet[2497]: E0124 00:53:45.331565 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:45.331636 kubelet[2497]: E0124 00:53:45.331566 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:45.331888 kubelet[2497]: I0124 00:53:45.331796 2497 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:45.386676 kubelet[2497]: I0124 00:53:45.386500 2497 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:53:45.612874 kernel: hrtimer: interrupt took 3217446 ns Jan 24 00:53:45.629587 kubelet[2497]: E0124 00:53:45.629524 2497 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Jan 24 00:53:45.629778 kubelet[2497]: E0124 00:53:45.629736 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:45.934853 kubelet[2497]: I0124 00:53:45.934486 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.934453905 podStartE2EDuration="3.934453905s" podCreationTimestamp="2026-01-24 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:45.904533802 +0000 UTC m=+1.718819939" watchObservedRunningTime="2026-01-24 00:53:45.934453905 +0000 UTC m=+1.748740040" Jan 24 00:53:45.934853 kubelet[2497]: I0124 00:53:45.934663 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.934656021 podStartE2EDuration="3.934656021s" podCreationTimestamp="2026-01-24 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:45.683572015 +0000 UTC m=+1.497858152" watchObservedRunningTime="2026-01-24 00:53:45.934656021 +0000 UTC m=+1.748942158" Jan 24 00:53:45.982433 kubelet[2497]: I0124 00:53:45.982360 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.9823414 podStartE2EDuration="3.9823414s" podCreationTimestamp="2026-01-24 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:45.966542889 +0000 UTC m=+1.780829026" watchObservedRunningTime="2026-01-24 00:53:45.9823414 +0000 UTC m=+1.796627546" Jan 24 00:53:46.335613 kubelet[2497]: E0124 00:53:46.335480 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:46.335613 kubelet[2497]: E0124 00:53:46.335566 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:47.338522 kubelet[2497]: E0124 00:53:47.338380 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:47.969751 kubelet[2497]: E0124 00:53:47.969611 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:49.497156 kubelet[2497]: I0124 00:53:49.497043 2497 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:53:49.497749 kubelet[2497]: I0124 00:53:49.497695 2497 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:53:49.497802 containerd[1458]: time="2026-01-24T00:53:49.497538869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 00:53:50.586516 systemd[1]: Created slice kubepods-besteffort-podc410542c_91db_42c6_ab23_ba7bc99f48a8.slice - libcontainer container kubepods-besteffort-podc410542c_91db_42c6_ab23_ba7bc99f48a8.slice. Jan 24 00:53:50.636318 kubelet[2497]: I0124 00:53:50.636266 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx5m\" (UniqueName: \"kubernetes.io/projected/c410542c-91db-42c6-ab23-ba7bc99f48a8-kube-api-access-gmx5m\") pod \"kube-proxy-rm2mm\" (UID: \"c410542c-91db-42c6-ab23-ba7bc99f48a8\") " pod="kube-system/kube-proxy-rm2mm" Jan 24 00:53:50.636318 kubelet[2497]: I0124 00:53:50.636315 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c410542c-91db-42c6-ab23-ba7bc99f48a8-kube-proxy\") pod \"kube-proxy-rm2mm\" (UID: \"c410542c-91db-42c6-ab23-ba7bc99f48a8\") " pod="kube-system/kube-proxy-rm2mm" Jan 24 00:53:50.636759 kubelet[2497]: I0124 00:53:50.636341 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c410542c-91db-42c6-ab23-ba7bc99f48a8-xtables-lock\") pod \"kube-proxy-rm2mm\" (UID: \"c410542c-91db-42c6-ab23-ba7bc99f48a8\") " pod="kube-system/kube-proxy-rm2mm" Jan 24 00:53:50.636759 kubelet[2497]: I0124 00:53:50.636354 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c410542c-91db-42c6-ab23-ba7bc99f48a8-lib-modules\") pod \"kube-proxy-rm2mm\" (UID: \"c410542c-91db-42c6-ab23-ba7bc99f48a8\") " pod="kube-system/kube-proxy-rm2mm" Jan 24 00:53:50.675894 systemd[1]: Created slice kubepods-besteffort-podae435103_d62d_485c_87e4_d7e0acde734f.slice - libcontainer container kubepods-besteffort-podae435103_d62d_485c_87e4_d7e0acde734f.slice. 
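
The four VerifyControllerAttachedVolume lines above enumerate kube-proxy's volumes: its ConfigMap, two hostPath mounts, and "kube-api-access-gmx5m", the projected service-account token the API server injects. A minimal sketch of the first three as client-go types; the host paths are assumptions based on the stock kubeadm kube-proxy DaemonSet, since the log only records volume names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {Name: "kube-proxy", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
                },
            }},
            {Name: "xtables-lock", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock"}, // assumed
            }},
            {Name: "lib-modules", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"}, // assumed
            }},
        }
        for _, v := range vols {
            fmt.Println(v.Name)
        }
    }
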
Jan 24 00:53:50.736927 kubelet[2497]: I0124 00:53:50.736821 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae435103-d62d-485c-87e4-d7e0acde734f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-lb9s7\" (UID: \"ae435103-d62d-485c-87e4-d7e0acde734f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lb9s7" Jan 24 00:53:50.736927 kubelet[2497]: I0124 00:53:50.736903 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd6vn\" (UniqueName: \"kubernetes.io/projected/ae435103-d62d-485c-87e4-d7e0acde734f-kube-api-access-qd6vn\") pod \"tigera-operator-65cdcdfd6d-lb9s7\" (UID: \"ae435103-d62d-485c-87e4-d7e0acde734f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lb9s7" Jan 24 00:53:50.900630 kubelet[2497]: E0124 00:53:50.900363 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:50.901876 containerd[1458]: time="2026-01-24T00:53:50.901764341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm2mm,Uid:c410542c-91db-42c6-ab23-ba7bc99f48a8,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:50.985316 containerd[1458]: time="2026-01-24T00:53:50.985181117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lb9s7,Uid:ae435103-d62d-485c-87e4-d7e0acde734f,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:53:51.049575 containerd[1458]: time="2026-01-24T00:53:51.049041964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:51.049575 containerd[1458]: time="2026-01-24T00:53:51.049160813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:51.049575 containerd[1458]: time="2026-01-24T00:53:51.049192150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.049575 containerd[1458]: time="2026-01-24T00:53:51.049339110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.051745 containerd[1458]: time="2026-01-24T00:53:51.051464836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:51.051745 containerd[1458]: time="2026-01-24T00:53:51.051594043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:51.051745 containerd[1458]: time="2026-01-24T00:53:51.051628577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.053022 containerd[1458]: time="2026-01-24T00:53:51.051769746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.088710 systemd[1]: Started cri-containerd-7b02908fcd1213ff44c05e293c039a4f4476705fc54c91e8346dca8a6e7519c3.scope - libcontainer container 7b02908fcd1213ff44c05e293c039a4f4476705fc54c91e8346dca8a6e7519c3. 
Jan 24 00:53:51.093338 systemd[1]: Started cri-containerd-310795e6b04d45a108ad4490365d32ba2aeba4ee56c56d22cd3a15a19eba2668.scope - libcontainer container 310795e6b04d45a108ad4490365d32ba2aeba4ee56c56d22cd3a15a19eba2668. Jan 24 00:53:51.127483 containerd[1458]: time="2026-01-24T00:53:51.127264961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm2mm,Uid:c410542c-91db-42c6-ab23-ba7bc99f48a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b02908fcd1213ff44c05e293c039a4f4476705fc54c91e8346dca8a6e7519c3\"" Jan 24 00:53:51.129093 kubelet[2497]: E0124 00:53:51.128791 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:51.142273 containerd[1458]: time="2026-01-24T00:53:51.142177327Z" level=info msg="CreateContainer within sandbox \"7b02908fcd1213ff44c05e293c039a4f4476705fc54c91e8346dca8a6e7519c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:53:51.149892 containerd[1458]: time="2026-01-24T00:53:51.149770747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lb9s7,Uid:ae435103-d62d-485c-87e4-d7e0acde734f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"310795e6b04d45a108ad4490365d32ba2aeba4ee56c56d22cd3a15a19eba2668\"" Jan 24 00:53:51.153312 containerd[1458]: time="2026-01-24T00:53:51.152684429Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:53:51.169214 containerd[1458]: time="2026-01-24T00:53:51.169151762Z" level=info msg="CreateContainer within sandbox \"7b02908fcd1213ff44c05e293c039a4f4476705fc54c91e8346dca8a6e7519c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63e304aa6d976938141f2cc772050645115870f0338da6f10fed2bc804c5badc\"" Jan 24 00:53:51.170821 containerd[1458]: time="2026-01-24T00:53:51.170604746Z" level=info msg="StartContainer for \"63e304aa6d976938141f2cc772050645115870f0338da6f10fed2bc804c5badc\"" Jan 24 00:53:51.214268 systemd[1]: Started cri-containerd-63e304aa6d976938141f2cc772050645115870f0338da6f10fed2bc804c5badc.scope - libcontainer container 63e304aa6d976938141f2cc772050645115870f0338da6f10fed2bc804c5badc. 
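
The RunPodSandbox / CreateContainer / StartContainer sequence around this point is the standard CRI pod lifecycle: the sandbox id returned first ("7b02908f...") names the cri-containerd-*.scope unit, and the container id ("63e304aa...") appears in the later "returns successfully" lines. A hedged end-to-end sketch of those three calls; the socket path and the kube-proxy image name (inferred from the kubelet version in the log) are assumptions:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: returns the sandbox id referenced by the scope units.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-rm2mm",
                Namespace: "kube-system",
                Uid:       "c410542c-91db-42c6-ab23-ba7bc99f48a8",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer inside that sandbox (image name is an assumption).
        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.1"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer: success corresponds to the
        //    "StartContainer ... returns successfully" log lines.
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
        fmt.Println(sb.PodSandboxId, c.ContainerId, err)
    }
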
Jan 24 00:53:51.260215 containerd[1458]: time="2026-01-24T00:53:51.260121119Z" level=info msg="StartContainer for \"63e304aa6d976938141f2cc772050645115870f0338da6f10fed2bc804c5badc\" returns successfully" Jan 24 00:53:51.367449 kubelet[2497]: E0124 00:53:51.367324 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.199145 kubelet[2497]: E0124 00:53:52.198562 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.221587 kubelet[2497]: I0124 00:53:52.221269 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rm2mm" podStartSLOduration=2.221245956 podStartE2EDuration="2.221245956s" podCreationTimestamp="2026-01-24 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:51.399585089 +0000 UTC m=+7.213871235" watchObservedRunningTime="2026-01-24 00:53:52.221245956 +0000 UTC m=+8.035532112" Jan 24 00:53:52.368802 kubelet[2497]: E0124 00:53:52.368655 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.718368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473113874.mount: Deactivated successfully. Jan 24 00:53:53.868172 containerd[1458]: time="2026-01-24T00:53:53.868067610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:53.869019 containerd[1458]: time="2026-01-24T00:53:53.868904659Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:53:53.870511 containerd[1458]: time="2026-01-24T00:53:53.870443831Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:53.874456 containerd[1458]: time="2026-01-24T00:53:53.874407185Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:53.875425 containerd[1458]: time="2026-01-24T00:53:53.875385218Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.722627144s" Jan 24 00:53:53.875474 containerd[1458]: time="2026-01-24T00:53:53.875432095Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:53:53.891052 containerd[1458]: time="2026-01-24T00:53:53.890834786Z" level=info msg="CreateContainer within sandbox \"310795e6b04d45a108ad4490365d32ba2aeba4ee56c56d22cd3a15a19eba2668\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:53:53.913590 containerd[1458]: time="2026-01-24T00:53:53.913513741Z" level=info 
msg="CreateContainer within sandbox \"310795e6b04d45a108ad4490365d32ba2aeba4ee56c56d22cd3a15a19eba2668\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"08d2d139441e1c5a2aa6ae1e90de0e895abd98863155ac8738c4ca645d59c59c\"" Jan 24 00:53:53.919377 containerd[1458]: time="2026-01-24T00:53:53.914593462Z" level=info msg="StartContainer for \"08d2d139441e1c5a2aa6ae1e90de0e895abd98863155ac8738c4ca645d59c59c\"" Jan 24 00:53:53.981300 systemd[1]: Started cri-containerd-08d2d139441e1c5a2aa6ae1e90de0e895abd98863155ac8738c4ca645d59c59c.scope - libcontainer container 08d2d139441e1c5a2aa6ae1e90de0e895abd98863155ac8738c4ca645d59c59c. Jan 24 00:53:54.056085 containerd[1458]: time="2026-01-24T00:53:54.054651051Z" level=info msg="StartContainer for \"08d2d139441e1c5a2aa6ae1e90de0e895abd98863155ac8738c4ca645d59c59c\" returns successfully" Jan 24 00:53:54.390132 kubelet[2497]: I0124 00:53:54.389905 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-lb9s7" podStartSLOduration=1.66571159 podStartE2EDuration="4.389888555s" podCreationTimestamp="2026-01-24 00:53:50 +0000 UTC" firstStartedPulling="2026-01-24 00:53:51.152305972 +0000 UTC m=+6.966592108" lastFinishedPulling="2026-01-24 00:53:53.876482927 +0000 UTC m=+9.690769073" observedRunningTime="2026-01-24 00:53:54.389828487 +0000 UTC m=+10.204114623" watchObservedRunningTime="2026-01-24 00:53:54.389888555 +0000 UTC m=+10.204174701" Jan 24 00:53:56.338517 kubelet[2497]: E0124 00:53:56.337730 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:57.976184 kubelet[2497]: E0124 00:53:57.976128 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:58.385790 kubelet[2497]: E0124 00:53:58.385679 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:00.135919 sudo[1632]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:00.140652 sshd[1629]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:00.145661 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:57578.service: Deactivated successfully. Jan 24 00:54:00.151156 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:54:00.151522 systemd[1]: session-7.scope: Consumed 7.145s CPU time, 161.6M memory peak, 0B memory swap peak. Jan 24 00:54:00.155306 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:54:00.159475 systemd-logind[1439]: Removed session 7. Jan 24 00:54:01.357389 update_engine[1443]: I20260124 00:54:01.357077 1443 update_attempter.cc:509] Updating boot flags... Jan 24 00:54:01.435603 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2920) Jan 24 00:54:04.620631 systemd[1]: Created slice kubepods-besteffort-pod89132715_2239_45da_82d4_0c1615e098da.slice - libcontainer container kubepods-besteffort-pod89132715_2239_45da_82d4_0c1615e098da.slice. 
Jan 24 00:54:04.655055 kubelet[2497]: I0124 00:54:04.655001 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/89132715-2239-45da-82d4-0c1615e098da-typha-certs\") pod \"calico-typha-756cc7fcd6-b7hsw\" (UID: \"89132715-2239-45da-82d4-0c1615e098da\") " pod="calico-system/calico-typha-756cc7fcd6-b7hsw" Jan 24 00:54:04.655055 kubelet[2497]: I0124 00:54:04.655064 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89132715-2239-45da-82d4-0c1615e098da-tigera-ca-bundle\") pod \"calico-typha-756cc7fcd6-b7hsw\" (UID: \"89132715-2239-45da-82d4-0c1615e098da\") " pod="calico-system/calico-typha-756cc7fcd6-b7hsw" Jan 24 00:54:04.655055 kubelet[2497]: I0124 00:54:04.655084 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmz94\" (UniqueName: \"kubernetes.io/projected/89132715-2239-45da-82d4-0c1615e098da-kube-api-access-hmz94\") pod \"calico-typha-756cc7fcd6-b7hsw\" (UID: \"89132715-2239-45da-82d4-0c1615e098da\") " pod="calico-system/calico-typha-756cc7fcd6-b7hsw" Jan 24 00:54:04.808321 systemd[1]: Created slice kubepods-besteffort-pod7abf6172_01d8_47bb_bc9c_34695c936b2b.slice - libcontainer container kubepods-besteffort-pod7abf6172_01d8_47bb_bc9c_34695c936b2b.slice. Jan 24 00:54:04.857711 kubelet[2497]: I0124 00:54:04.857117 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-cni-net-dir\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.857711 kubelet[2497]: I0124 00:54:04.857154 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-var-run-calico\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.857711 kubelet[2497]: I0124 00:54:04.857170 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-xtables-lock\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.857711 kubelet[2497]: I0124 00:54:04.857185 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4z2\" (UniqueName: \"kubernetes.io/projected/7abf6172-01d8-47bb-bc9c-34695c936b2b-kube-api-access-2z4z2\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.857711 kubelet[2497]: I0124 00:54:04.857201 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-flexvol-driver-host\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858119 kubelet[2497]: I0124 00:54:04.857213 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-lib-modules\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858119 kubelet[2497]: I0124 00:54:04.857229 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7abf6172-01d8-47bb-bc9c-34695c936b2b-node-certs\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858119 kubelet[2497]: I0124 00:54:04.857241 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-var-lib-calico\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858119 kubelet[2497]: I0124 00:54:04.857255 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-cni-bin-dir\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858119 kubelet[2497]: I0124 00:54:04.857268 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7abf6172-01d8-47bb-bc9c-34695c936b2b-tigera-ca-bundle\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858226 kubelet[2497]: I0124 00:54:04.857300 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-cni-log-dir\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.858226 kubelet[2497]: I0124 00:54:04.857483 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7abf6172-01d8-47bb-bc9c-34695c936b2b-policysync\") pod \"calico-node-wt5qq\" (UID: \"7abf6172-01d8-47bb-bc9c-34695c936b2b\") " pod="calico-system/calico-node-wt5qq" Jan 24 00:54:04.931114 kubelet[2497]: E0124 00:54:04.930893 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:04.932977 containerd[1458]: time="2026-01-24T00:54:04.931729381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-756cc7fcd6-b7hsw,Uid:89132715-2239-45da-82d4-0c1615e098da,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:04.960633 kubelet[2497]: E0124 00:54:04.960607 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:04.960795 kubelet[2497]: W0124 00:54:04.960752 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:04.960795 kubelet[2497]: E0124 00:54:04.960773 2497 plugins.go:697] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:04.963235 kubelet[2497]: E0124 00:54:04.963171 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:04.963235 kubelet[2497]: W0124 00:54:04.963232 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:04.963309 kubelet[2497]: E0124 00:54:04.963254 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:04.968985 containerd[1458]: time="2026-01-24T00:54:04.967001276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:04.968985 containerd[1458]: time="2026-01-24T00:54:04.967802032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:04.968985 containerd[1458]: time="2026-01-24T00:54:04.968323068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:04.969289 containerd[1458]: time="2026-01-24T00:54:04.968435297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:04.973564 kubelet[2497]: E0124 00:54:04.973533 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:04.973672 kubelet[2497]: W0124 00:54:04.973650 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:04.973743 kubelet[2497]: E0124 00:54:04.973730 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:04.977563 kubelet[2497]: E0124 00:54:04.977549 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:04.977636 kubelet[2497]: W0124 00:54:04.977623 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:04.977699 kubelet[2497]: E0124 00:54:04.977688 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:05.004401 kubelet[2497]: E0124 00:54:05.003546 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:05.017290 systemd[1]: Started cri-containerd-e13b6e2d0a1b2a8ff9456c43f5451d2bf6e5abe3f8c18d83397a4e521f58c748.scope - libcontainer container e13b6e2d0a1b2a8ff9456c43f5451d2bf6e5abe3f8c18d83397a4e521f58c748. Jan 24 00:54:05.038897 kubelet[2497]: E0124 00:54:05.038833 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.039201 kubelet[2497]: W0124 00:54:05.039059 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.039201 kubelet[2497]: E0124 00:54:05.039084 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.039437 kubelet[2497]: E0124 00:54:05.039425 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.039532 kubelet[2497]: W0124 00:54:05.039480 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.039532 kubelet[2497]: E0124 00:54:05.039492 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.040113 kubelet[2497]: E0124 00:54:05.039853 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.040113 kubelet[2497]: W0124 00:54:05.039898 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.040113 kubelet[2497]: E0124 00:54:05.039908 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.040367 kubelet[2497]: E0124 00:54:05.040356 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.040416 kubelet[2497]: W0124 00:54:05.040406 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.040462 kubelet[2497]: E0124 00:54:05.040453 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:05.040798 kubelet[2497]: E0124 00:54:05.040786 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.040906 kubelet[2497]: W0124 00:54:05.040851 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.041007 kubelet[2497]: E0124 00:54:05.040996 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.041377 kubelet[2497]: E0124 00:54:05.041366 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.041427 kubelet[2497]: W0124 00:54:05.041418 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.041472 kubelet[2497]: E0124 00:54:05.041463 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.042069 kubelet[2497]: E0124 00:54:05.041811 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.042069 kubelet[2497]: W0124 00:54:05.041821 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.042069 kubelet[2497]: E0124 00:54:05.041831 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.042226 kubelet[2497]: E0124 00:54:05.042215 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.042273 kubelet[2497]: W0124 00:54:05.042263 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.042312 kubelet[2497]: E0124 00:54:05.042303 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.042660 kubelet[2497]: E0124 00:54:05.042649 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.042722 kubelet[2497]: W0124 00:54:05.042711 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.042762 kubelet[2497]: E0124 00:54:05.042754 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:05.043195 kubelet[2497]: E0124 00:54:05.043184 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.043296 kubelet[2497]: W0124 00:54:05.043243 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.043296 kubelet[2497]: E0124 00:54:05.043257 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.043762 kubelet[2497]: E0124 00:54:05.043751 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.043908 kubelet[2497]: W0124 00:54:05.043809 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.043908 kubelet[2497]: E0124 00:54:05.043821 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.044401 kubelet[2497]: E0124 00:54:05.044338 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.044401 kubelet[2497]: W0124 00:54:05.044349 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.044401 kubelet[2497]: E0124 00:54:05.044357 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.044766 kubelet[2497]: E0124 00:54:05.044756 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.044913 kubelet[2497]: W0124 00:54:05.044818 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.044913 kubelet[2497]: E0124 00:54:05.044830 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.045447 kubelet[2497]: E0124 00:54:05.045334 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.045447 kubelet[2497]: W0124 00:54:05.045344 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.045447 kubelet[2497]: E0124 00:54:05.045352 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:05.045635 kubelet[2497]: E0124 00:54:05.045624 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.045691 kubelet[2497]: W0124 00:54:05.045681 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.045736 kubelet[2497]: E0124 00:54:05.045726 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.047083 kubelet[2497]: E0124 00:54:05.047025 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.047150 kubelet[2497]: W0124 00:54:05.047139 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.047190 kubelet[2497]: E0124 00:54:05.047181 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.047688 kubelet[2497]: E0124 00:54:05.047594 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.047688 kubelet[2497]: W0124 00:54:05.047605 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.047688 kubelet[2497]: E0124 00:54:05.047614 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.048306 kubelet[2497]: E0124 00:54:05.048265 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.048588 kubelet[2497]: W0124 00:54:05.048438 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.048588 kubelet[2497]: E0124 00:54:05.048452 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.049592 kubelet[2497]: E0124 00:54:05.049450 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.049592 kubelet[2497]: W0124 00:54:05.049464 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.049592 kubelet[2497]: E0124 00:54:05.049474 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:05.049777 kubelet[2497]: E0124 00:54:05.049764 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.049824 kubelet[2497]: W0124 00:54:05.049814 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.049916 kubelet[2497]: E0124 00:54:05.049904 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.060415 kubelet[2497]: E0124 00:54:05.059121 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.060415 kubelet[2497]: W0124 00:54:05.059139 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.060415 kubelet[2497]: E0124 00:54:05.059154 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:05.060415 kubelet[2497]: I0124 00:54:05.059180 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa4fe547-535b-479c-9c29-60c4ee40c975-kubelet-dir\") pod \"csi-node-driver-cmnzx\" (UID: \"fa4fe547-535b-479c-9c29-60c4ee40c975\") " pod="calico-system/csi-node-driver-cmnzx" Jan 24 00:54:05.060415 kubelet[2497]: E0124 00:54:05.059486 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:05.060415 kubelet[2497]: W0124 00:54:05.059495 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:05.060415 kubelet[2497]: E0124 00:54:05.059505 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
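The FlexVolume triplet above (driver-call.go:262, driver-call.go:149, plugins.go:697) recurs essentially unchanged through 00:54:05.196: kubelet's plugin prober execs the driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and JSON-decodes its stdout, but the uds binary is not installed yet (Calico's calico-node pod typically installs it via its flexvol driver, which is why the flexvol-driver-host hostPath volume appears above), so the captured output is empty and decoding fails. A minimal Go sketch reproducing the decode error; the result type is an illustrative stand-in, not the real kubelet API:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative stand-in for the status object kubelet expects a
    // FlexVolume driver to print as JSON (field names are assumptions).
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message"`
    }

    func main() {
        output := []byte("") // the uds binary is missing, so stdout is empty
        var st driverStatus
        if err := json.Unmarshal(output, &st); err != nil {
            // Prints: unexpected end of JSON input (the error in the log)
            fmt.Println(err)
        }
    }

Once calico-node starts and drops the binary in place, the probe should succeed and these entries should stop.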
Jan 24 00:54:05.060415 kubelet[2497]: I0124 00:54:05.059554 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fa4fe547-535b-479c-9c29-60c4ee40c975-varrun\") pod \"csi-node-driver-cmnzx\" (UID: \"fa4fe547-535b-479c-9c29-60c4ee40c975\") " pod="calico-system/csi-node-driver-cmnzx"
Jan 24 00:54:05.060696 containerd[1458]: time="2026-01-24T00:54:05.060149204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-756cc7fcd6-b7hsw,Uid:89132715-2239-45da-82d4-0c1615e098da,Namespace:calico-system,Attempt:0,} returns sandbox id \"e13b6e2d0a1b2a8ff9456c43f5451d2bf6e5abe3f8c18d83397a4e521f58c748\""
Jan 24 00:54:05.060731 kubelet[2497]: I0124 00:54:05.060340 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa4fe547-535b-479c-9c29-60c4ee40c975-socket-dir\") pod \"csi-node-driver-cmnzx\" (UID: \"fa4fe547-535b-479c-9c29-60c4ee40c975\") " pod="calico-system/csi-node-driver-cmnzx"
Jan 24 00:54:05.061256 kubelet[2497]: E0124 00:54:05.061183 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:54:05.063653 kubelet[2497]: I0124 00:54:05.063433 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdll2\" (UniqueName: \"kubernetes.io/projected/fa4fe547-535b-479c-9c29-60c4ee40c975-kube-api-access-gdll2\") pod \"csi-node-driver-cmnzx\" (UID: \"fa4fe547-535b-479c-9c29-60c4ee40c975\") " pod="calico-system/csi-node-driver-cmnzx"
Jan 24 00:54:05.064557 containerd[1458]: time="2026-01-24T00:54:05.064348930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:54:05.066053 kubelet[2497]: I0124 00:54:05.065996 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa4fe547-535b-479c-9c29-60c4ee40c975-registration-dir\") pod \"csi-node-driver-cmnzx\" (UID: \"fa4fe547-535b-479c-9c29-60c4ee40c975\") " pod="calico-system/csi-node-driver-cmnzx"
Jan 24 00:54:05.114991 kubelet[2497]: E0124 00:54:05.114723 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:54:05.115503 containerd[1458]: time="2026-01-24T00:54:05.115373547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wt5qq,Uid:7abf6172-01d8-47bb-bc9c-34695c936b2b,Namespace:calico-system,Attempt:0,}"
Jan 24 00:54:05.148792 containerd[1458]: time="2026-01-24T00:54:05.148688423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:54:05.148792 containerd[1458]: time="2026-01-24T00:54:05.148755568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:54:05.148792 containerd[1458]: time="2026-01-24T00:54:05.148765947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:54:05.150028 containerd[1458]: time="2026-01-24T00:54:05.148910416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:54:05.176363 systemd[1]: Started cri-containerd-1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef.scope - libcontainer container 1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef.
Jan 24 00:54:05.196276 kubelet[2497]: E0124 00:54:05.196186 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:54:05.196276 kubelet[2497]: W0124 00:54:05.196201 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:54:05.196276 kubelet[2497]: E0124 00:54:05.196214 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:54:05.215319 containerd[1458]: time="2026-01-24T00:54:05.215284643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wt5qq,Uid:7abf6172-01d8-47bb-bc9c-34695c936b2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\""
Jan 24 00:54:05.217016 kubelet[2497]: E0124 00:54:05.216925 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:54:06.160986 containerd[1458]: time="2026-01-24T00:54:06.160797746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:54:06.161641 containerd[1458]: time="2026-01-24T00:54:06.161580927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:54:06.163261 containerd[1458]: time="2026-01-24T00:54:06.163189624Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:54:06.166530 containerd[1458]: time="2026-01-24T00:54:06.166445749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:54:06.167888 containerd[1458]: time="2026-01-24T00:54:06.167778668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.103397709s"
Jan 24 00:54:06.167976 containerd[1458]: time="2026-01-24T00:54:06.167847266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:54:06.169464 containerd[1458]: time="2026-01-24T00:54:06.169374278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:54:06.184596 containerd[1458]: time="2026-01-24T00:54:06.184320071Z" level=info msg="CreateContainer within sandbox \"e13b6e2d0a1b2a8ff9456c43f5451d2bf6e5abe3f8c18d83397a4e521f58c748\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:54:06.204644 containerd[1458]: time="2026-01-24T00:54:06.204581046Z" level=info msg="CreateContainer within sandbox \"e13b6e2d0a1b2a8ff9456c43f5451d2bf6e5abe3f8c18d83397a4e521f58c748\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e51780f0215d02f71f515da11076322e730c7804289745120aadc0cab9cb6d3\"" Jan 24 00:54:06.206910 containerd[1458]: time="2026-01-24T00:54:06.206786280Z" level=info msg="StartContainer for \"1e51780f0215d02f71f515da11076322e730c7804289745120aadc0cab9cb6d3\"" Jan 24 00:54:06.242143 systemd[1]: Started cri-containerd-1e51780f0215d02f71f515da11076322e730c7804289745120aadc0cab9cb6d3.scope - libcontainer container 1e51780f0215d02f71f515da11076322e730c7804289745120aadc0cab9cb6d3. Jan 24 00:54:06.298041 containerd[1458]: time="2026-01-24T00:54:06.297993010Z" level=info msg="StartContainer for \"1e51780f0215d02f71f515da11076322e730c7804289745120aadc0cab9cb6d3\" returns successfully" Jan 24 00:54:06.316624 kubelet[2497]: E0124 00:54:06.316520 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:06.411569 kubelet[2497]: E0124 00:54:06.411318 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:06.463117 kubelet[2497]: E0124 00:54:06.463054 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.463117 kubelet[2497]: W0124 00:54:06.463102 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.463117 kubelet[2497]: E0124 00:54:06.463124 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.463449 kubelet[2497]: E0124 00:54:06.463416 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.463449 kubelet[2497]: W0124 00:54:06.463446 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.463516 kubelet[2497]: E0124 00:54:06.463460 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.463800 kubelet[2497]: E0124 00:54:06.463768 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.463800 kubelet[2497]: W0124 00:54:06.463798 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.463893 kubelet[2497]: E0124 00:54:06.463809 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.464244 kubelet[2497]: E0124 00:54:06.464212 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.464244 kubelet[2497]: W0124 00:54:06.464242 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.464308 kubelet[2497]: E0124 00:54:06.464252 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.464640 kubelet[2497]: E0124 00:54:06.464609 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.464640 kubelet[2497]: W0124 00:54:06.464640 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.464750 kubelet[2497]: E0124 00:54:06.464648 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.464998 kubelet[2497]: E0124 00:54:06.464918 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.464998 kubelet[2497]: W0124 00:54:06.464997 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.465058 kubelet[2497]: E0124 00:54:06.465006 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.466038 kubelet[2497]: E0124 00:54:06.465986 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.466038 kubelet[2497]: W0124 00:54:06.466018 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.466100 kubelet[2497]: E0124 00:54:06.466027 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.466584 kubelet[2497]: E0124 00:54:06.466443 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.466584 kubelet[2497]: W0124 00:54:06.466456 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.466584 kubelet[2497]: E0124 00:54:06.466465 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.468434 kubelet[2497]: E0124 00:54:06.468330 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.468434 kubelet[2497]: W0124 00:54:06.468360 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.468434 kubelet[2497]: E0124 00:54:06.468371 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.468770 kubelet[2497]: E0124 00:54:06.468740 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.468770 kubelet[2497]: W0124 00:54:06.468770 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.468770 kubelet[2497]: E0124 00:54:06.468779 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.469183 kubelet[2497]: E0124 00:54:06.469080 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.469183 kubelet[2497]: W0124 00:54:06.469115 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.469183 kubelet[2497]: E0124 00:54:06.469123 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.469552 kubelet[2497]: E0124 00:54:06.469499 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.469552 kubelet[2497]: W0124 00:54:06.469533 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.469552 kubelet[2497]: E0124 00:54:06.469542 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.470744 kubelet[2497]: E0124 00:54:06.470643 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.470744 kubelet[2497]: W0124 00:54:06.470673 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.470744 kubelet[2497]: E0124 00:54:06.470682 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.471175 kubelet[2497]: E0124 00:54:06.471122 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.471175 kubelet[2497]: W0124 00:54:06.471155 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.471175 kubelet[2497]: E0124 00:54:06.471164 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.472392 kubelet[2497]: E0124 00:54:06.472297 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.472392 kubelet[2497]: W0124 00:54:06.472328 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.472392 kubelet[2497]: E0124 00:54:06.472338 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.484049 kubelet[2497]: E0124 00:54:06.484011 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.484049 kubelet[2497]: W0124 00:54:06.484046 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.484049 kubelet[2497]: E0124 00:54:06.484061 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.484543 kubelet[2497]: E0124 00:54:06.484514 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.484629 kubelet[2497]: W0124 00:54:06.484545 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.484629 kubelet[2497]: E0124 00:54:06.484556 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.487000 kubelet[2497]: E0124 00:54:06.486915 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.487000 kubelet[2497]: W0124 00:54:06.486997 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.487069 kubelet[2497]: E0124 00:54:06.487010 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.487649 kubelet[2497]: E0124 00:54:06.487619 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.487649 kubelet[2497]: W0124 00:54:06.487649 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.487700 kubelet[2497]: E0124 00:54:06.487659 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.490400 kubelet[2497]: E0124 00:54:06.490347 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.490400 kubelet[2497]: W0124 00:54:06.490384 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.490400 kubelet[2497]: E0124 00:54:06.490395 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.490908 kubelet[2497]: E0124 00:54:06.490819 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.490908 kubelet[2497]: W0124 00:54:06.490852 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.490908 kubelet[2497]: E0124 00:54:06.490893 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.491325 kubelet[2497]: E0124 00:54:06.491294 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.491325 kubelet[2497]: W0124 00:54:06.491324 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.491381 kubelet[2497]: E0124 00:54:06.491333 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.491595 kubelet[2497]: E0124 00:54:06.491563 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.491595 kubelet[2497]: W0124 00:54:06.491593 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.491739 kubelet[2497]: E0124 00:54:06.491603 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.492284 kubelet[2497]: E0124 00:54:06.492252 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.492284 kubelet[2497]: W0124 00:54:06.492282 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.492346 kubelet[2497]: E0124 00:54:06.492292 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.493402 kubelet[2497]: E0124 00:54:06.493354 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.493402 kubelet[2497]: W0124 00:54:06.493390 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.493402 kubelet[2497]: E0124 00:54:06.493400 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.493818 kubelet[2497]: E0124 00:54:06.493786 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.493818 kubelet[2497]: W0124 00:54:06.493816 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.493918 kubelet[2497]: E0124 00:54:06.493825 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.495246 kubelet[2497]: E0124 00:54:06.495214 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.495246 kubelet[2497]: W0124 00:54:06.495244 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.495304 kubelet[2497]: E0124 00:54:06.495256 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.495806 kubelet[2497]: E0124 00:54:06.495756 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.495806 kubelet[2497]: W0124 00:54:06.495792 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.495806 kubelet[2497]: E0124 00:54:06.495802 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.496359 kubelet[2497]: E0124 00:54:06.496320 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.496359 kubelet[2497]: W0124 00:54:06.496351 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.496359 kubelet[2497]: E0124 00:54:06.496361 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.497733 kubelet[2497]: E0124 00:54:06.497674 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.497920 kubelet[2497]: W0124 00:54:06.497846 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.497920 kubelet[2497]: E0124 00:54:06.497907 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.498629 kubelet[2497]: E0124 00:54:06.498543 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.498629 kubelet[2497]: W0124 00:54:06.498572 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.498629 kubelet[2497]: E0124 00:54:06.498581 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.499907 kubelet[2497]: E0124 00:54:06.499833 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.499907 kubelet[2497]: W0124 00:54:06.499904 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.500056 kubelet[2497]: E0124 00:54:06.499915 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:06.501072 kubelet[2497]: E0124 00:54:06.501030 2497 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:06.501072 kubelet[2497]: W0124 00:54:06.501065 2497 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:06.501072 kubelet[2497]: E0124 00:54:06.501075 2497 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:06.654153 containerd[1458]: time="2026-01-24T00:54:06.653838166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:06.655180 containerd[1458]: time="2026-01-24T00:54:06.655038484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:54:06.656403 containerd[1458]: time="2026-01-24T00:54:06.656361140Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:06.659309 containerd[1458]: time="2026-01-24T00:54:06.659246716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:06.660046 containerd[1458]: time="2026-01-24T00:54:06.659912534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 490.471733ms" Jan 24 00:54:06.660710 containerd[1458]: time="2026-01-24T00:54:06.660323126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:54:06.668100 containerd[1458]: time="2026-01-24T00:54:06.666828408Z" level=info msg="CreateContainer within sandbox \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:54:06.683337 containerd[1458]: time="2026-01-24T00:54:06.683288506Z" level=info msg="CreateContainer within sandbox \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2\"" Jan 24 00:54:06.683919 containerd[1458]: time="2026-01-24T00:54:06.683842108Z" level=info msg="StartContainer for \"749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2\"" Jan 24 00:54:06.730144 systemd[1]: Started cri-containerd-749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2.scope - libcontainer container 749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2. 
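The recurring driver-call.go/plugins.go triple above comes from the kubelet's FlexVolume prober: it found a plugin directory named nodeagent~uds, tried to run the driver binary uds inside it with the argument init, got empty stdout because the executable is missing, and then failed to parse that empty output as the JSON status object the FlexVolume contract requires. The flexvol-driver init container started just above (from the pod2daemon-flexvol image) is what eventually installs that binary. A minimal sketch of the init handshake a conforming driver performs, in Go; the struct and field names follow the published FlexVolume JSON convention, not the actual driver this directory is meant to hold:

    // flexdriver.go: illustrative FlexVolume driver handshake.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON object a driver must print to stdout;
    // the kubelet unmarshals exactly this shape after each call.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Printing nothing here is precisely what produces the
            // "unexpected end of JSON input" errors in the log above.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Unsupported commands report "Not supported" with a nonzero exit,
        // letting the kubelet fall back to its default handling.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }

Because the probe loop re-scans the plugin directory continuously, the same three records repeat until the binary appears, which is why the bursts stop once the flexvol-driver container has run.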
Jan 24 00:54:06.778149 systemd[1]: cri-containerd-749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2.scope: Deactivated successfully. Jan 24 00:54:06.788036 containerd[1458]: time="2026-01-24T00:54:06.787979207Z" level=info msg="StartContainer for \"749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2\" returns successfully" Jan 24 00:54:06.817386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2-rootfs.mount: Deactivated successfully. Jan 24 00:54:06.869290 containerd[1458]: time="2026-01-24T00:54:06.869075192Z" level=info msg="shim disconnected" id=749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2 namespace=k8s.io Jan 24 00:54:06.869290 containerd[1458]: time="2026-01-24T00:54:06.869212627Z" level=warning msg="cleaning up after shim disconnected" id=749d493c917e8b08cde29be7ef0e112b97e5503f4299e586aebd1b8191a511c2 namespace=k8s.io Jan 24 00:54:06.869290 containerd[1458]: time="2026-01-24T00:54:06.869222295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:07.415357 kubelet[2497]: I0124 00:54:07.415204 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:54:07.415807 kubelet[2497]: E0124 00:54:07.415612 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:07.415807 kubelet[2497]: E0124 00:54:07.415637 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:07.416825 containerd[1458]: time="2026-01-24T00:54:07.416770419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:54:07.433359 kubelet[2497]: I0124 00:54:07.433239 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-756cc7fcd6-b7hsw" podStartSLOduration=2.327889904 podStartE2EDuration="3.433224447s" podCreationTimestamp="2026-01-24 00:54:04 +0000 UTC" firstStartedPulling="2026-01-24 00:54:05.063849613 +0000 UTC m=+20.878135750" lastFinishedPulling="2026-01-24 00:54:06.169184147 +0000 UTC m=+21.983470293" observedRunningTime="2026-01-24 00:54:06.44020343 +0000 UTC m=+22.254489576" watchObservedRunningTime="2026-01-24 00:54:07.433224447 +0000 UTC m=+23.247510583" Jan 24 00:54:08.317507 kubelet[2497]: E0124 00:54:08.317417 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:09.090371 kubelet[2497]: I0124 00:54:09.090336 2497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:54:09.092011 kubelet[2497]: E0124 00:54:09.091301 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:09.263247 containerd[1458]: time="2026-01-24T00:54:09.263157477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:09.264277 containerd[1458]: time="2026-01-24T00:54:09.264201158Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:54:09.265616 containerd[1458]: time="2026-01-24T00:54:09.265547169Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:09.268627 containerd[1458]: time="2026-01-24T00:54:09.268556099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:09.269294 containerd[1458]: time="2026-01-24T00:54:09.269194007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.852365041s" Jan 24 00:54:09.269294 containerd[1458]: time="2026-01-24T00:54:09.269240863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:54:09.275541 containerd[1458]: time="2026-01-24T00:54:09.275487897Z" level=info msg="CreateContainer within sandbox \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:54:09.295242 containerd[1458]: time="2026-01-24T00:54:09.295141358Z" level=info msg="CreateContainer within sandbox \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987\"" Jan 24 00:54:09.296354 containerd[1458]: time="2026-01-24T00:54:09.295910712Z" level=info msg="StartContainer for \"7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987\"" Jan 24 00:54:09.337322 systemd[1]: Started cri-containerd-7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987.scope - libcontainer container 7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987. Jan 24 00:54:09.376411 containerd[1458]: time="2026-01-24T00:54:09.375755212Z" level=info msg="StartContainer for \"7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987\" returns successfully" Jan 24 00:54:09.424004 kubelet[2497]: E0124 00:54:09.423904 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:09.424623 kubelet[2497]: E0124 00:54:09.424609 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:10.090080 systemd[1]: cri-containerd-7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987.scope: Deactivated successfully. Jan 24 00:54:10.117842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987-rootfs.mount: Deactivated successfully. 
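The earlier pod_startup_latency_tracker record for calico-typha-756cc7fcd6-b7hsw is arithmetically self-consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window bracketed by firstStartedPulling and lastFinishedPulling on the monotonic (m=+) clock. A short verification, assuming nothing beyond the numbers quoted in the record:

    // latency.go: cross-check of the pod startup latency figures.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created := time.Date(2026, 1, 24, 0, 54, 4, 0, time.UTC)         // podCreationTimestamp
        running := time.Date(2026, 1, 24, 0, 54, 7, 433224447, time.UTC) // observedRunningTime
        firstPull := 20878135750 * time.Nanosecond                       // firstStartedPulling, m=+20.878135750
        lastPull := 21983470293 * time.Nanosecond                        // lastFinishedPulling, m=+21.983470293

        e2e := running.Sub(created)         // podStartE2EDuration
        slo := e2e - (lastPull - firstPull) // podStartSLOduration: E2E minus the pull window
        fmt.Println(e2e, slo)               // prints: 3.433224447s 2.327889904s
    }

Both results match the logged values exactly, which supports the reading that the SLO figure deliberately excludes time spent pulling images.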
Jan 24 00:54:10.124000 containerd[1458]: time="2026-01-24T00:54:10.123717180Z" level=info msg="shim disconnected" id=7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987 namespace=k8s.io Jan 24 00:54:10.124000 containerd[1458]: time="2026-01-24T00:54:10.123765650Z" level=warning msg="cleaning up after shim disconnected" id=7c00d8b813303dce08c342cbbbd40a00586cb524fba13ca8df45170c8beed987 namespace=k8s.io Jan 24 00:54:10.124000 containerd[1458]: time="2026-01-24T00:54:10.123774466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:10.131522 kubelet[2497]: I0124 00:54:10.131419 2497 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:54:10.192101 systemd[1]: Created slice kubepods-besteffort-pod0518115e_fdb4_4931_870f_91f50e670040.slice - libcontainer container kubepods-besteffort-pod0518115e_fdb4_4931_870f_91f50e670040.slice. Jan 24 00:54:10.203404 systemd[1]: Created slice kubepods-burstable-podaa4858bc_37d8_46bd_8eaf_fa6b9172ea0f.slice - libcontainer container kubepods-burstable-podaa4858bc_37d8_46bd_8eaf_fa6b9172ea0f.slice. Jan 24 00:54:10.210052 systemd[1]: Created slice kubepods-besteffort-pod80bbd4c4_8103_4c2d_b518_8fb02d9e29a2.slice - libcontainer container kubepods-besteffort-pod80bbd4c4_8103_4c2d_b518_8fb02d9e29a2.slice. Jan 24 00:54:10.219724 systemd[1]: Created slice kubepods-besteffort-pod2c115e27_4279_4a97_b3f2_127b5b368e0a.slice - libcontainer container kubepods-besteffort-pod2c115e27_4279_4a97_b3f2_127b5b368e0a.slice. Jan 24 00:54:10.222062 kubelet[2497]: I0124 00:54:10.221915 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f-config-volume\") pod \"coredns-66bc5c9577-x44ds\" (UID: \"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f\") " pod="kube-system/coredns-66bc5c9577-x44ds" Jan 24 00:54:10.223656 kubelet[2497]: I0124 00:54:10.222140 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6whf\" (UniqueName: \"kubernetes.io/projected/aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f-kube-api-access-z6whf\") pod \"coredns-66bc5c9577-x44ds\" (UID: \"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f\") " pod="kube-system/coredns-66bc5c9577-x44ds" Jan 24 00:54:10.223656 kubelet[2497]: I0124 00:54:10.223129 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c115e27-4279-4a97-b3f2-127b5b368e0a-calico-apiserver-certs\") pod \"calico-apiserver-5f99fd9549-84j78\" (UID: \"2c115e27-4279-4a97-b3f2-127b5b368e0a\") " pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" Jan 24 00:54:10.223656 kubelet[2497]: I0124 00:54:10.223157 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tnb\" (UniqueName: \"kubernetes.io/projected/2c115e27-4279-4a97-b3f2-127b5b368e0a-kube-api-access-98tnb\") pod \"calico-apiserver-5f99fd9549-84j78\" (UID: \"2c115e27-4279-4a97-b3f2-127b5b368e0a\") " pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" Jan 24 00:54:10.223656 kubelet[2497]: I0124 00:54:10.223172 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96853c29-f6ae-4323-a4cd-7dc7ec0a8a17-calico-apiserver-certs\") pod \"calico-apiserver-5f99fd9549-l6xtt\" (UID: 
\"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17\") " pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" Jan 24 00:54:10.223656 kubelet[2497]: I0124 00:54:10.223193 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0518115e-fdb4-4931-870f-91f50e670040-whisker-ca-bundle\") pod \"whisker-56d98d76cc-sj5nk\" (UID: \"0518115e-fdb4-4931-870f-91f50e670040\") " pod="calico-system/whisker-56d98d76cc-sj5nk" Jan 24 00:54:10.224028 kubelet[2497]: I0124 00:54:10.223208 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549ae7b9-2710-43b2-acf2-03007d90bb7e-config\") pod \"goldmane-7c778bb748-8plzs\" (UID: \"549ae7b9-2710-43b2-acf2-03007d90bb7e\") " pod="calico-system/goldmane-7c778bb748-8plzs" Jan 24 00:54:10.224028 kubelet[2497]: I0124 00:54:10.223228 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqp7q\" (UniqueName: \"kubernetes.io/projected/067d9f5c-5021-4e9c-bbcc-c8666caf180f-kube-api-access-lqp7q\") pod \"coredns-66bc5c9577-mknhf\" (UID: \"067d9f5c-5021-4e9c-bbcc-c8666caf180f\") " pod="kube-system/coredns-66bc5c9577-mknhf" Jan 24 00:54:10.224028 kubelet[2497]: I0124 00:54:10.223241 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80bbd4c4-8103-4c2d-b518-8fb02d9e29a2-tigera-ca-bundle\") pod \"calico-kube-controllers-598df9794d-p5z6d\" (UID: \"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2\") " pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" Jan 24 00:54:10.224028 kubelet[2497]: I0124 00:54:10.223355 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/549ae7b9-2710-43b2-acf2-03007d90bb7e-goldmane-key-pair\") pod \"goldmane-7c778bb748-8plzs\" (UID: \"549ae7b9-2710-43b2-acf2-03007d90bb7e\") " pod="calico-system/goldmane-7c778bb748-8plzs" Jan 24 00:54:10.224028 kubelet[2497]: I0124 00:54:10.223417 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0518115e-fdb4-4931-870f-91f50e670040-whisker-backend-key-pair\") pod \"whisker-56d98d76cc-sj5nk\" (UID: \"0518115e-fdb4-4931-870f-91f50e670040\") " pod="calico-system/whisker-56d98d76cc-sj5nk" Jan 24 00:54:10.224210 kubelet[2497]: I0124 00:54:10.223445 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrz49\" (UniqueName: \"kubernetes.io/projected/80bbd4c4-8103-4c2d-b518-8fb02d9e29a2-kube-api-access-vrz49\") pod \"calico-kube-controllers-598df9794d-p5z6d\" (UID: \"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2\") " pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" Jan 24 00:54:10.224210 kubelet[2497]: I0124 00:54:10.223490 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/067d9f5c-5021-4e9c-bbcc-c8666caf180f-config-volume\") pod \"coredns-66bc5c9577-mknhf\" (UID: \"067d9f5c-5021-4e9c-bbcc-c8666caf180f\") " pod="kube-system/coredns-66bc5c9577-mknhf" Jan 24 00:54:10.224210 kubelet[2497]: I0124 00:54:10.223518 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sh6vl\" (UniqueName: \"kubernetes.io/projected/549ae7b9-2710-43b2-acf2-03007d90bb7e-kube-api-access-sh6vl\") pod \"goldmane-7c778bb748-8plzs\" (UID: \"549ae7b9-2710-43b2-acf2-03007d90bb7e\") " pod="calico-system/goldmane-7c778bb748-8plzs" Jan 24 00:54:10.224210 kubelet[2497]: I0124 00:54:10.223543 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvcx\" (UniqueName: \"kubernetes.io/projected/96853c29-f6ae-4323-a4cd-7dc7ec0a8a17-kube-api-access-ggvcx\") pod \"calico-apiserver-5f99fd9549-l6xtt\" (UID: \"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17\") " pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" Jan 24 00:54:10.224210 kubelet[2497]: I0124 00:54:10.223575 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7n2\" (UniqueName: \"kubernetes.io/projected/0518115e-fdb4-4931-870f-91f50e670040-kube-api-access-7c7n2\") pod \"whisker-56d98d76cc-sj5nk\" (UID: \"0518115e-fdb4-4931-870f-91f50e670040\") " pod="calico-system/whisker-56d98d76cc-sj5nk" Jan 24 00:54:10.224389 kubelet[2497]: I0124 00:54:10.223596 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549ae7b9-2710-43b2-acf2-03007d90bb7e-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-8plzs\" (UID: \"549ae7b9-2710-43b2-acf2-03007d90bb7e\") " pod="calico-system/goldmane-7c778bb748-8plzs" Jan 24 00:54:10.231304 systemd[1]: Created slice kubepods-besteffort-pod96853c29_f6ae_4323_a4cd_7dc7ec0a8a17.slice - libcontainer container kubepods-besteffort-pod96853c29_f6ae_4323_a4cd_7dc7ec0a8a17.slice. Jan 24 00:54:10.241568 systemd[1]: Created slice kubepods-besteffort-pod549ae7b9_2710_43b2_acf2_03007d90bb7e.slice - libcontainer container kubepods-besteffort-pod549ae7b9_2710_43b2_acf2_03007d90bb7e.slice. Jan 24 00:54:10.245195 systemd[1]: Created slice kubepods-burstable-pod067d9f5c_5021_4e9c_bbcc_c8666caf180f.slice - libcontainer container kubepods-burstable-pod067d9f5c_5021_4e9c_bbcc_c8666caf180f.slice. Jan 24 00:54:10.325267 systemd[1]: Created slice kubepods-besteffort-podfa4fe547_535b_479c_9c29_60c4ee40c975.slice - libcontainer container kubepods-besteffort-podfa4fe547_535b_479c_9c29_60c4ee40c975.slice. 
Jan 24 00:54:10.355088 containerd[1458]: time="2026-01-24T00:54:10.354607573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cmnzx,Uid:fa4fe547-535b-479c-9c29-60c4ee40c975,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:10.429677 kubelet[2497]: E0124 00:54:10.428832 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:10.487719 containerd[1458]: time="2026-01-24T00:54:10.487621559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:54:10.510707 kubelet[2497]: E0124 00:54:10.510539 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:10.516666 containerd[1458]: time="2026-01-24T00:54:10.516531216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x44ds,Uid:aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f,Namespace:kube-system,Attempt:0,}" Jan 24 00:54:10.516825 containerd[1458]: time="2026-01-24T00:54:10.516694713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d98d76cc-sj5nk,Uid:0518115e-fdb4-4931-870f-91f50e670040,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:10.519240 containerd[1458]: time="2026-01-24T00:54:10.519190413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598df9794d-p5z6d,Uid:80bbd4c4-8103-4c2d-b518-8fb02d9e29a2,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:10.534162 containerd[1458]: time="2026-01-24T00:54:10.534120765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-84j78,Uid:2c115e27-4279-4a97-b3f2-127b5b368e0a,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:54:10.537773 containerd[1458]: time="2026-01-24T00:54:10.537705579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-l6xtt,Uid:96853c29-f6ae-4323-a4cd-7dc7ec0a8a17,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:54:10.541227 containerd[1458]: time="2026-01-24T00:54:10.541103999Z" level=error msg="Failed to destroy network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.541700 containerd[1458]: time="2026-01-24T00:54:10.541668859Z" level=error msg="encountered an error cleaning up failed sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.541891 containerd[1458]: time="2026-01-24T00:54:10.541788922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cmnzx,Uid:fa4fe547-535b-479c-9c29-60c4ee40c975,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.552297 kubelet[2497]: E0124 00:54:10.552192 2497 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:10.553691 containerd[1458]: time="2026-01-24T00:54:10.553524366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mknhf,Uid:067d9f5c-5021-4e9c-bbcc-c8666caf180f,Namespace:kube-system,Attempt:0,}" Jan 24 00:54:10.554432 kubelet[2497]: E0124 00:54:10.554268 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.554432 kubelet[2497]: E0124 00:54:10.554333 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cmnzx" Jan 24 00:54:10.554432 kubelet[2497]: E0124 00:54:10.554352 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cmnzx" Jan 24 00:54:10.554707 kubelet[2497]: E0124 00:54:10.554420 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:10.556511 containerd[1458]: time="2026-01-24T00:54:10.556210570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8plzs,Uid:549ae7b9-2710-43b2-acf2-03007d90bb7e,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:10.680917 containerd[1458]: time="2026-01-24T00:54:10.680631220Z" level=error msg="Failed to destroy network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.682788 containerd[1458]: time="2026-01-24T00:54:10.681555749Z" level=error msg="encountered an error cleaning up failed sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.682788 containerd[1458]: time="2026-01-24T00:54:10.681604871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598df9794d-p5z6d,Uid:80bbd4c4-8103-4c2d-b518-8fb02d9e29a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.684116 kubelet[2497]: E0124 00:54:10.682165 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.684116 kubelet[2497]: E0124 00:54:10.682214 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" Jan 24 00:54:10.684116 kubelet[2497]: E0124 00:54:10.682234 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" Jan 24 00:54:10.684273 kubelet[2497]: E0124 00:54:10.682278 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-598df9794d-p5z6d_calico-system(80bbd4c4-8103-4c2d-b518-8fb02d9e29a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-598df9794d-p5z6d_calico-system(80bbd4c4-8103-4c2d-b518-8fb02d9e29a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:10.732820 containerd[1458]: time="2026-01-24T00:54:10.726904747Z" level=error msg="Failed to destroy network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.734144 containerd[1458]: time="2026-01-24T00:54:10.734064592Z" level=error msg="encountered an error cleaning up failed sandbox 
\"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.734238 containerd[1458]: time="2026-01-24T00:54:10.734164938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d98d76cc-sj5nk,Uid:0518115e-fdb4-4931-870f-91f50e670040,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.735065 kubelet[2497]: E0124 00:54:10.734449 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.735065 kubelet[2497]: E0124 00:54:10.734522 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d98d76cc-sj5nk" Jan 24 00:54:10.735065 kubelet[2497]: E0124 00:54:10.734549 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d98d76cc-sj5nk" Jan 24 00:54:10.735294 kubelet[2497]: E0124 00:54:10.734643 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56d98d76cc-sj5nk_calico-system(0518115e-fdb4-4931-870f-91f50e670040)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56d98d76cc-sj5nk_calico-system(0518115e-fdb4-4931-870f-91f50e670040)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d98d76cc-sj5nk" podUID="0518115e-fdb4-4931-870f-91f50e670040" Jan 24 00:54:10.776192 containerd[1458]: time="2026-01-24T00:54:10.776106748Z" level=error msg="Failed to destroy network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.776763 containerd[1458]: time="2026-01-24T00:54:10.776684171Z" 
level=error msg="encountered an error cleaning up failed sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.776818 containerd[1458]: time="2026-01-24T00:54:10.776773457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mknhf,Uid:067d9f5c-5021-4e9c-bbcc-c8666caf180f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.777355 kubelet[2497]: E0124 00:54:10.777205 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.777355 kubelet[2497]: E0124 00:54:10.777304 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mknhf" Jan 24 00:54:10.777355 kubelet[2497]: E0124 00:54:10.777332 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mknhf" Jan 24 00:54:10.777511 kubelet[2497]: E0124 00:54:10.777397 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mknhf_kube-system(067d9f5c-5021-4e9c-bbcc-c8666caf180f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mknhf_kube-system(067d9f5c-5021-4e9c-bbcc-c8666caf180f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mknhf" podUID="067d9f5c-5021-4e9c-bbcc-c8666caf180f" Jan 24 00:54:10.786042 containerd[1458]: time="2026-01-24T00:54:10.785826565Z" level=error msg="Failed to destroy network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.786769 
containerd[1458]: time="2026-01-24T00:54:10.786738871Z" level=error msg="encountered an error cleaning up failed sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.787065 containerd[1458]: time="2026-01-24T00:54:10.786850919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-l6xtt,Uid:96853c29-f6ae-4323-a4cd-7dc7ec0a8a17,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.787718 kubelet[2497]: E0124 00:54:10.787693 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.788035 kubelet[2497]: E0124 00:54:10.787923 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" Jan 24 00:54:10.788105 kubelet[2497]: E0124 00:54:10.788090 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" Jan 24 00:54:10.788421 kubelet[2497]: E0124 00:54:10.788186 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f99fd9549-l6xtt_calico-apiserver(96853c29-f6ae-4323-a4cd-7dc7ec0a8a17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f99fd9549-l6xtt_calico-apiserver(96853c29-f6ae-4323-a4cd-7dc7ec0a8a17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:10.791018 containerd[1458]: time="2026-01-24T00:54:10.790985174Z" level=error msg="Failed to destroy network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.791680 containerd[1458]: time="2026-01-24T00:54:10.791584759Z" level=error msg="encountered an error cleaning up failed sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.791680 containerd[1458]: time="2026-01-24T00:54:10.791671811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x44ds,Uid:aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.792080 kubelet[2497]: E0124 00:54:10.791843 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.792080 kubelet[2497]: E0124 00:54:10.791912 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x44ds" Jan 24 00:54:10.792080 kubelet[2497]: E0124 00:54:10.792001 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x44ds" Jan 24 00:54:10.792202 kubelet[2497]: E0124 00:54:10.792037 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x44ds_kube-system(aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x44ds_kube-system(aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x44ds" podUID="aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f" Jan 24 00:54:10.803406 containerd[1458]: time="2026-01-24T00:54:10.803340705Z" level=error msg="Failed to destroy network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.803912 containerd[1458]: time="2026-01-24T00:54:10.803788738Z" level=error msg="encountered an error cleaning up failed sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.804009 containerd[1458]: time="2026-01-24T00:54:10.803919291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-84j78,Uid:2c115e27-4279-4a97-b3f2-127b5b368e0a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.804288 kubelet[2497]: E0124 00:54:10.804204 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.804288 kubelet[2497]: E0124 00:54:10.804246 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" Jan 24 00:54:10.804288 kubelet[2497]: E0124 00:54:10.804266 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" Jan 24 00:54:10.804379 kubelet[2497]: E0124 00:54:10.804305 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f99fd9549-84j78_calico-apiserver(2c115e27-4279-4a97-b3f2-127b5b368e0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f99fd9549-84j78_calico-apiserver(2c115e27-4279-4a97-b3f2-127b5b368e0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:10.819088 containerd[1458]: time="2026-01-24T00:54:10.818979457Z" level=error msg="Failed to 
destroy network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.819492 containerd[1458]: time="2026-01-24T00:54:10.819410218Z" level=error msg="encountered an error cleaning up failed sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.819552 containerd[1458]: time="2026-01-24T00:54:10.819513780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8plzs,Uid:549ae7b9-2710-43b2-acf2-03007d90bb7e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.820063 kubelet[2497]: E0124 00:54:10.819810 2497 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:10.820063 kubelet[2497]: E0124 00:54:10.820018 2497 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8plzs" Jan 24 00:54:10.820063 kubelet[2497]: E0124 00:54:10.820043 2497 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8plzs" Jan 24 00:54:10.820160 kubelet[2497]: E0124 00:54:10.820102 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-8plzs_calico-system(549ae7b9-2710-43b2-acf2-03007d90bb7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-8plzs_calico-system(549ae7b9-2710-43b2-acf2-03007d90bb7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:11.433093 kubelet[2497]: I0124 
00:54:11.433029 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:11.440519 kubelet[2497]: I0124 00:54:11.440127 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:11.443251 kubelet[2497]: I0124 00:54:11.443123 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:11.446532 containerd[1458]: time="2026-01-24T00:54:11.445850993Z" level=info msg="StopPodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\"" Jan 24 00:54:11.447083 kubelet[2497]: I0124 00:54:11.446671 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:11.447179 containerd[1458]: time="2026-01-24T00:54:11.447152125Z" level=info msg="StopPodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\"" Jan 24 00:54:11.447356 containerd[1458]: time="2026-01-24T00:54:11.447203735Z" level=info msg="StopPodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\"" Jan 24 00:54:11.447547 containerd[1458]: time="2026-01-24T00:54:11.447522974Z" level=info msg="Ensure that sandbox 1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397 in task-service has been cleanup successfully" Jan 24 00:54:11.448078 containerd[1458]: time="2026-01-24T00:54:11.447561524Z" level=info msg="Ensure that sandbox 7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477 in task-service has been cleanup successfully" Jan 24 00:54:11.448735 containerd[1458]: time="2026-01-24T00:54:11.448585088Z" level=info msg="Ensure that sandbox 36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7 in task-service has been cleanup successfully" Jan 24 00:54:11.449146 containerd[1458]: time="2026-01-24T00:54:11.449046727Z" level=info msg="StopPodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\"" Jan 24 00:54:11.449310 containerd[1458]: time="2026-01-24T00:54:11.449273808Z" level=info msg="Ensure that sandbox d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502 in task-service has been cleanup successfully" Jan 24 00:54:11.454548 kubelet[2497]: I0124 00:54:11.454489 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:11.456000 containerd[1458]: time="2026-01-24T00:54:11.455626601Z" level=info msg="StopPodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\"" Jan 24 00:54:11.456000 containerd[1458]: time="2026-01-24T00:54:11.455819590Z" level=info msg="Ensure that sandbox c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127 in task-service has been cleanup successfully" Jan 24 00:54:11.458540 kubelet[2497]: I0124 00:54:11.457660 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:11.458599 containerd[1458]: time="2026-01-24T00:54:11.458481029Z" level=info msg="StopPodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\"" Jan 24 00:54:11.458652 containerd[1458]: 
time="2026-01-24T00:54:11.458626349Z" level=info msg="Ensure that sandbox 824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e in task-service has been cleanup successfully" Jan 24 00:54:11.461899 kubelet[2497]: I0124 00:54:11.461730 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:11.463587 containerd[1458]: time="2026-01-24T00:54:11.463018989Z" level=info msg="StopPodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\"" Jan 24 00:54:11.463587 containerd[1458]: time="2026-01-24T00:54:11.463225763Z" level=info msg="Ensure that sandbox cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d in task-service has been cleanup successfully" Jan 24 00:54:11.467554 kubelet[2497]: I0124 00:54:11.467277 2497 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:11.469778 containerd[1458]: time="2026-01-24T00:54:11.469595605Z" level=info msg="StopPodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\"" Jan 24 00:54:11.471796 containerd[1458]: time="2026-01-24T00:54:11.471597949Z" level=info msg="Ensure that sandbox c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694 in task-service has been cleanup successfully" Jan 24 00:54:11.525361 containerd[1458]: time="2026-01-24T00:54:11.525189110Z" level=error msg="StopPodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" failed" error="failed to destroy network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.526652 kubelet[2497]: E0124 00:54:11.526549 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:11.527138 kubelet[2497]: E0124 00:54:11.526810 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397"} Jan 24 00:54:11.527366 kubelet[2497]: E0124 00:54:11.527088 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.528043 kubelet[2497]: E0124 00:54:11.527778 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:11.538007 containerd[1458]: time="2026-01-24T00:54:11.537783924Z" level=error msg="StopPodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" failed" error="failed to destroy network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.541475 kubelet[2497]: E0124 00:54:11.540204 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:11.541475 kubelet[2497]: E0124 00:54:11.540261 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477"} Jan 24 00:54:11.541475 kubelet[2497]: E0124 00:54:11.540302 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"067d9f5c-5021-4e9c-bbcc-c8666caf180f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.541475 kubelet[2497]: E0124 00:54:11.540334 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"067d9f5c-5021-4e9c-bbcc-c8666caf180f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mknhf" podUID="067d9f5c-5021-4e9c-bbcc-c8666caf180f" Jan 24 00:54:11.541852 containerd[1458]: time="2026-01-24T00:54:11.541213862Z" level=error msg="StopPodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" failed" error="failed to destroy network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.542031 kubelet[2497]: E0124 00:54:11.541495 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:11.542031 kubelet[2497]: E0124 00:54:11.541599 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7"} Jan 24 00:54:11.542031 kubelet[2497]: E0124 00:54:11.541623 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c115e27-4279-4a97-b3f2-127b5b368e0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.542031 kubelet[2497]: E0124 00:54:11.541708 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c115e27-4279-4a97-b3f2-127b5b368e0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:11.547371 containerd[1458]: time="2026-01-24T00:54:11.547310385Z" level=error msg="StopPodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" failed" error="failed to destroy network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.547627 kubelet[2497]: E0124 00:54:11.547559 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:11.547627 kubelet[2497]: E0124 00:54:11.547590 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502"} Jan 24 00:54:11.547627 kubelet[2497]: E0124 00:54:11.547612 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jan 24 00:54:11.547761 kubelet[2497]: E0124 00:54:11.547630 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:11.556024 containerd[1458]: time="2026-01-24T00:54:11.555397837Z" level=error msg="StopPodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" failed" error="failed to destroy network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.556091 kubelet[2497]: E0124 00:54:11.555669 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:11.556091 kubelet[2497]: E0124 00:54:11.555701 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694"} Jan 24 00:54:11.556091 kubelet[2497]: E0124 00:54:11.555726 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa4fe547-535b-479c-9c29-60c4ee40c975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.556091 kubelet[2497]: E0124 00:54:11.555746 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa4fe547-535b-479c-9c29-60c4ee40c975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:11.557453 containerd[1458]: time="2026-01-24T00:54:11.557427234Z" level=error msg="StopPodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" failed" error="failed to destroy network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.558012 kubelet[2497]: E0124 00:54:11.557806 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:11.558069 kubelet[2497]: E0124 00:54:11.558048 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e"} Jan 24 00:54:11.558133 kubelet[2497]: E0124 00:54:11.558092 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"549ae7b9-2710-43b2-acf2-03007d90bb7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.558262 kubelet[2497]: E0124 00:54:11.558143 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"549ae7b9-2710-43b2-acf2-03007d90bb7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:11.560682 containerd[1458]: time="2026-01-24T00:54:11.560607569Z" level=error msg="StopPodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" failed" error="failed to destroy network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.561251 kubelet[2497]: E0124 00:54:11.561117 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:11.561251 kubelet[2497]: E0124 00:54:11.561185 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127"} Jan 24 00:54:11.561251 kubelet[2497]: E0124 00:54:11.561217 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.561251 kubelet[2497]: E0124 00:54:11.561247 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x44ds" podUID="aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f" Jan 24 00:54:11.561669 containerd[1458]: time="2026-01-24T00:54:11.561628486Z" level=error msg="StopPodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" failed" error="failed to destroy network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:11.561851 kubelet[2497]: E0124 00:54:11.561812 2497 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:11.561915 kubelet[2497]: E0124 00:54:11.561896 2497 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d"} Jan 24 00:54:11.562034 kubelet[2497]: E0124 00:54:11.561918 2497 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0518115e-fdb4-4931-870f-91f50e670040\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:11.562034 kubelet[2497]: E0124 00:54:11.561993 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0518115e-fdb4-4931-870f-91f50e670040\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d98d76cc-sj5nk" podUID="0518115e-fdb4-4931-870f-91f50e670040" Jan 24 00:54:18.140821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759428856.mount: Deactivated successfully. 
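Every RunPodSandbox and StopPodSandbox attempt in the burst above fails on the same stat: the Calico CNI plugin learns its node name from /var/lib/calico/nodename, a file that calico/node writes only once it is running with the host's /var/lib/calico/ mounted, and at this point the calico/node image is still being pulled. A minimal sketch (not Calico source) of the probe that keeps failing, assuming only the path quoted in the errors:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path quoted verbatim in the CNI errors above; calico/node creates it
	// at startup, so ENOENT here means the node agent is not running yet.
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico-node starts, this is exactly the failure the plugin reports.
		fmt.Fprintf(os.Stderr, "stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("calico node name: %s\n", data)
}
```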
Jan 24 00:54:18.369842 containerd[1458]: time="2026-01-24T00:54:18.369673048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:18.370746 containerd[1458]: time="2026-01-24T00:54:18.370654864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:54:18.372271 containerd[1458]: time="2026-01-24T00:54:18.372185405Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:18.375518 containerd[1458]: time="2026-01-24T00:54:18.375335005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:18.376493 containerd[1458]: time="2026-01-24T00:54:18.375835094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.888148506s" Jan 24 00:54:18.376493 containerd[1458]: time="2026-01-24T00:54:18.375915163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:54:18.395100 containerd[1458]: time="2026-01-24T00:54:18.394848891Z" level=info msg="CreateContainer within sandbox \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:54:18.423300 containerd[1458]: time="2026-01-24T00:54:18.423220663Z" level=info msg="CreateContainer within sandbox \"1a918fb7bb81635489f5d78a2342ef458a13ee4ee54636aeab8134cd2e5dd0ef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"85a192171243997d7b926d7ed26bd11c526a759e9f9f8a4014194d72bfe40867\"" Jan 24 00:54:18.424308 containerd[1458]: time="2026-01-24T00:54:18.424068153Z" level=info msg="StartContainer for \"85a192171243997d7b926d7ed26bd11c526a759e9f9f8a4014194d72bfe40867\"" Jan 24 00:54:18.511332 systemd[1]: Started cri-containerd-85a192171243997d7b926d7ed26bd11c526a759e9f9f8a4014194d72bfe40867.scope - libcontainer container 85a192171243997d7b926d7ed26bd11c526a759e9f9f8a4014194d72bfe40867. Jan 24 00:54:18.565809 containerd[1458]: time="2026-01-24T00:54:18.565739334Z" level=info msg="StartContainer for \"85a192171243997d7b926d7ed26bd11c526a759e9f9f8a4014194d72bfe40867\" returns successfully" Jan 24 00:54:18.694965 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:54:18.695165 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
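The calico/node pull requested at 00:54:10.487 above completes here: 156,883,675 bytes read in 7.888148506s, roughly 19.9 MB/s. A sketch of that arithmetic, with the byte count and duration copied from the "stop pulling image" and "Pulled image" entries:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the containerd entries above.
	const bytesRead = 156883675               // "bytes read=156883675"
	pullTime := 7888148506 * time.Nanosecond  // "in 7.888148506s"

	mbPerSec := float64(bytesRead) / pullTime.Seconds() / 1e6
	fmt.Printf("pulled %d bytes in %v (~%.1f MB/s)\n", bytesRead, pullTime, mbPerSec)
}
```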
Jan 24 00:54:18.842017 containerd[1458]: time="2026-01-24T00:54:18.841815222Z" level=info msg="StopPodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\"" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:18.986 [INFO][3808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:18.988 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" iface="eth0" netns="/var/run/netns/cni-82c21935-0ed4-2173-067d-3e6a9b82a8a9" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:18.989 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" iface="eth0" netns="/var/run/netns/cni-82c21935-0ed4-2173-067d-3e6a9b82a8a9" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:18.989 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" iface="eth0" netns="/var/run/netns/cni-82c21935-0ed4-2173-067d-3e6a9b82a8a9" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:18.990 [INFO][3808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:18.990 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.117 [INFO][3823] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.119 [INFO][3823] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.120 [INFO][3823] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.132 [WARNING][3823] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.132 [INFO][3823] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.134 [INFO][3823] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:19.143827 containerd[1458]: 2026-01-24 00:54:19.137 [INFO][3808] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:19.144243 containerd[1458]: time="2026-01-24T00:54:19.144046998Z" level=info msg="TearDown network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" successfully" Jan 24 00:54:19.144243 containerd[1458]: time="2026-01-24T00:54:19.144072645Z" level=info msg="StopPodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" returns successfully" Jan 24 00:54:19.149763 systemd[1]: run-netns-cni\x2d82c21935\x2d0ed4\x2d2173\x2d067d\x2d3e6a9b82a8a9.mount: Deactivated successfully. Jan 24 00:54:19.206079 kubelet[2497]: I0124 00:54:19.205994 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0518115e-fdb4-4931-870f-91f50e670040-whisker-ca-bundle\") pod \"0518115e-fdb4-4931-870f-91f50e670040\" (UID: \"0518115e-fdb4-4931-870f-91f50e670040\") " Jan 24 00:54:19.206545 kubelet[2497]: I0124 00:54:19.206148 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c7n2\" (UniqueName: \"kubernetes.io/projected/0518115e-fdb4-4931-870f-91f50e670040-kube-api-access-7c7n2\") pod \"0518115e-fdb4-4931-870f-91f50e670040\" (UID: \"0518115e-fdb4-4931-870f-91f50e670040\") " Jan 24 00:54:19.206545 kubelet[2497]: I0124 00:54:19.206193 2497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0518115e-fdb4-4931-870f-91f50e670040-whisker-backend-key-pair\") pod \"0518115e-fdb4-4931-870f-91f50e670040\" (UID: \"0518115e-fdb4-4931-870f-91f50e670040\") " Jan 24 00:54:19.206545 kubelet[2497]: I0124 00:54:19.206506 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0518115e-fdb4-4931-870f-91f50e670040-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0518115e-fdb4-4931-870f-91f50e670040" (UID: "0518115e-fdb4-4931-870f-91f50e670040"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:54:19.215270 kubelet[2497]: I0124 00:54:19.215158 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0518115e-fdb4-4931-870f-91f50e670040-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0518115e-fdb4-4931-870f-91f50e670040" (UID: "0518115e-fdb4-4931-870f-91f50e670040"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:54:19.215646 kubelet[2497]: I0124 00:54:19.215592 2497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0518115e-fdb4-4931-870f-91f50e670040-kube-api-access-7c7n2" (OuterVolumeSpecName: "kube-api-access-7c7n2") pod "0518115e-fdb4-4931-870f-91f50e670040" (UID: "0518115e-fdb4-4931-870f-91f50e670040"). InnerVolumeSpecName "kube-api-access-7c7n2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:54:19.216726 systemd[1]: var-lib-kubelet-pods-0518115e\x2dfdb4\x2d4931\x2d870f\x2d91f50e670040-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7c7n2.mount: Deactivated successfully. Jan 24 00:54:19.217077 systemd[1]: var-lib-kubelet-pods-0518115e\x2dfdb4\x2d4931\x2d870f\x2d91f50e670040-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
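With calico-node now running, the retried StopPodSandbox for the whisker sandbox succeeds: the IPAM plugin takes its host-wide lock, finds no address recorded for the handle (the original ADD never completed), logs the WARNING above, and returns success anyway, so repeated teardowns converge. A sketch of that idempotent DEL, using a map and mutex as stand-ins for Calico's real datastore and lock:

```go
package main

import (
	"log"
	"sync"
)

var ipamLock sync.Mutex // stand-in for the "host-wide IPAM lock" in the log

// releaseByHandle mirrors the behaviour above: releasing a handle with no
// recorded allocation warns and succeeds rather than failing the DEL.
func releaseByHandle(allocs map[string]string, handleID string) {
	ipamLock.Lock()
	defer ipamLock.Unlock()
	if _, ok := allocs[handleID]; !ok {
		log.Printf("WARNING: asked to release address but it doesn't exist; ignoring handle %q", handleID)
		return
	}
	delete(allocs, handleID)
	log.Printf("released address for handle %q", handleID)
}

func main() {
	// The whisker sandbox never got an address (its ADD failed earlier), so
	// this reproduces the ipam_plugin.go WARNING in the teardown above.
	allocs := map[string]string{}
	releaseByHandle(allocs, "k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d")
}
```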
Jan 24 00:54:19.306973 kubelet[2497]: I0124 00:54:19.306726 2497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7c7n2\" (UniqueName: \"kubernetes.io/projected/0518115e-fdb4-4931-870f-91f50e670040-kube-api-access-7c7n2\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:19.306973 kubelet[2497]: I0124 00:54:19.306786 2497 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0518115e-fdb4-4931-870f-91f50e670040-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:19.306973 kubelet[2497]: I0124 00:54:19.306796 2497 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0518115e-fdb4-4931-870f-91f50e670040-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:19.501126 kubelet[2497]: E0124 00:54:19.499566 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:19.507402 systemd[1]: Removed slice kubepods-besteffort-pod0518115e_fdb4_4931_870f_91f50e670040.slice - libcontainer container kubepods-besteffort-pod0518115e_fdb4_4931_870f_91f50e670040.slice. Jan 24 00:54:19.530456 kubelet[2497]: I0124 00:54:19.530181 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wt5qq" podStartSLOduration=2.3716308870000002 podStartE2EDuration="15.530067275s" podCreationTimestamp="2026-01-24 00:54:04 +0000 UTC" firstStartedPulling="2026-01-24 00:54:05.218663384 +0000 UTC m=+21.032949520" lastFinishedPulling="2026-01-24 00:54:18.377099771 +0000 UTC m=+34.191385908" observedRunningTime="2026-01-24 00:54:19.528365345 +0000 UTC m=+35.342651480" watchObservedRunningTime="2026-01-24 00:54:19.530067275 +0000 UTC m=+35.344353410" Jan 24 00:54:19.624085 systemd[1]: Created slice kubepods-besteffort-pod3c68c08e_36da_4f47_947e_e20fabb43d39.slice - libcontainer container kubepods-besteffort-pod3c68c08e_36da_4f47_947e_e20fabb43d39.slice. 
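The startup-latency entry above is internally consistent: observedRunningTime minus podCreationTimestamp gives the 15.530067275s E2E figure, and subtracting the image-pull window (firstStartedPulling to lastFinishedPulling) leaves the ~2.37s SLO duration; the tracker logs that last value through float64 seconds, hence the 2.3716308870000002 tail. A quick Go check using the timestamps exactly as logged:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches the time.Time format the kubelet logs,
// e.g. "2026-01-24 00:54:19.530067275 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-24 00:54:04 +0000 UTC")
	firstPull := mustParse("2026-01-24 00:54:05.218663384 +0000 UTC")
	lastPull := mustParse("2026-01-24 00:54:18.377099771 +0000 UTC")
	running := mustParse("2026-01-24 00:54:19.530067275 +0000 UTC")

	e2e := running.Sub(created)      // podStartE2EDuration: 15.530067275s
	pull := lastPull.Sub(firstPull)  // image-pull window: 13.158436387s
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull ~ podStartSLOduration: 2.371630888s
}
```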
Jan 24 00:54:19.712192 kubelet[2497]: I0124 00:54:19.712045 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c68c08e-36da-4f47-947e-e20fabb43d39-whisker-backend-key-pair\") pod \"whisker-5df7ff5fdf-zht4d\" (UID: \"3c68c08e-36da-4f47-947e-e20fabb43d39\") " pod="calico-system/whisker-5df7ff5fdf-zht4d" Jan 24 00:54:19.712192 kubelet[2497]: I0124 00:54:19.712103 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c68c08e-36da-4f47-947e-e20fabb43d39-whisker-ca-bundle\") pod \"whisker-5df7ff5fdf-zht4d\" (UID: \"3c68c08e-36da-4f47-947e-e20fabb43d39\") " pod="calico-system/whisker-5df7ff5fdf-zht4d" Jan 24 00:54:19.712192 kubelet[2497]: I0124 00:54:19.712124 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn49l\" (UniqueName: \"kubernetes.io/projected/3c68c08e-36da-4f47-947e-e20fabb43d39-kube-api-access-hn49l\") pod \"whisker-5df7ff5fdf-zht4d\" (UID: \"3c68c08e-36da-4f47-947e-e20fabb43d39\") " pod="calico-system/whisker-5df7ff5fdf-zht4d" Jan 24 00:54:19.949707 containerd[1458]: time="2026-01-24T00:54:19.949619344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df7ff5fdf-zht4d,Uid:3c68c08e-36da-4f47-947e-e20fabb43d39,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:20.130060 systemd-networkd[1387]: calia862e98531c: Link UP Jan 24 00:54:20.131277 systemd-networkd[1387]: calia862e98531c: Gained carrier Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:19.991 [INFO][3873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.005 [INFO][3873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0 whisker-5df7ff5fdf- calico-system 3c68c08e-36da-4f47-947e-e20fabb43d39 905 0 2026-01-24 00:54:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5df7ff5fdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5df7ff5fdf-zht4d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia862e98531c [] [] }} ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.005 [INFO][3873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.054 [INFO][3888] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" HandleID="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Workload="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.055 [INFO][3888] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" HandleID="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Workload="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5df7ff5fdf-zht4d", "timestamp":"2026-01-24 00:54:20.054691586 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.055 [INFO][3888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.055 [INFO][3888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.055 [INFO][3888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.069 [INFO][3888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.080 [INFO][3888] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.088 [INFO][3888] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.091 [INFO][3888] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.094 [INFO][3888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.094 [INFO][3888] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.097 [INFO][3888] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.103 [INFO][3888] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.110 [INFO][3888] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.110 [INFO][3888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" host="localhost" Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.110 [INFO][3888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:20.153855 containerd[1458]: 2026-01-24 00:54:20.110 [INFO][3888] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" HandleID="k8s-pod-network.4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Workload="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.154514 containerd[1458]: 2026-01-24 00:54:20.113 [INFO][3873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0", GenerateName:"whisker-5df7ff5fdf-", Namespace:"calico-system", SelfLink:"", UID:"3c68c08e-36da-4f47-947e-e20fabb43d39", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df7ff5fdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5df7ff5fdf-zht4d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia862e98531c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:20.154514 containerd[1458]: 2026-01-24 00:54:20.113 [INFO][3873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.154514 containerd[1458]: 2026-01-24 00:54:20.114 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia862e98531c ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.154514 containerd[1458]: 2026-01-24 00:54:20.131 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.154514 containerd[1458]: 2026-01-24 00:54:20.131 [INFO][3873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0", GenerateName:"whisker-5df7ff5fdf-", Namespace:"calico-system", SelfLink:"", UID:"3c68c08e-36da-4f47-947e-e20fabb43d39", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df7ff5fdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e", Pod:"whisker-5df7ff5fdf-zht4d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia862e98531c", MAC:"5a:79:56:29:dc:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:20.154514 containerd[1458]: 2026-01-24 00:54:20.147 [INFO][3873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e" Namespace="calico-system" Pod="whisker-5df7ff5fdf-zht4d" WorkloadEndpoint="localhost-k8s-whisker--5df7ff5fdf--zht4d-eth0" Jan 24 00:54:20.200749 containerd[1458]: time="2026-01-24T00:54:20.198639055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:20.200749 containerd[1458]: time="2026-01-24T00:54:20.200490604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:20.200749 containerd[1458]: time="2026-01-24T00:54:20.200504500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:20.200749 containerd[1458]: time="2026-01-24T00:54:20.200590340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:20.227849 systemd[1]: run-containerd-runc-k8s.io-4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e-runc.i4TRs2.mount: Deactivated successfully. Jan 24 00:54:20.240184 systemd[1]: Started cri-containerd-4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e.scope - libcontainer container 4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e. 
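The MAC finally written to the endpoint above, 5a:79:56:29:dc:73, follows the usual convention for generated veth addresses: the first octet has the locally-administered bit (0x02) set and the multicast bit (0x01) clear. A sketch of generating an address with those same two properties:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomUnicastLAA returns a random locally-administered unicast MAC.
func randomUnicastLAA() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // set locally-administered, clear multicast
	return mac, nil
}

func main() {
	mac, err := randomUnicastLAA()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. an address in the same class as 5a:79:56:29:dc:73
}
```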
Jan 24 00:54:20.284256 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:20.322815 kubelet[2497]: I0124 00:54:20.322432 2497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0518115e-fdb4-4931-870f-91f50e670040" path="/var/lib/kubelet/pods/0518115e-fdb4-4931-870f-91f50e670040/volumes" Jan 24 00:54:20.395674 containerd[1458]: time="2026-01-24T00:54:20.395569304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df7ff5fdf-zht4d,Uid:3c68c08e-36da-4f47-947e-e20fabb43d39,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a4f893eb0f86fb554bd73ffcc7ac6c198d591b36b31b22a7f600c5b2086512e\"" Jan 24 00:54:20.401267 containerd[1458]: time="2026-01-24T00:54:20.401238379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:54:20.505797 kubelet[2497]: E0124 00:54:20.505589 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:20.618797 containerd[1458]: time="2026-01-24T00:54:20.617392488Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:20.636586 containerd[1458]: time="2026-01-24T00:54:20.620850579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:54:20.636586 containerd[1458]: time="2026-01-24T00:54:20.621194096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:54:20.636772 kubelet[2497]: E0124 00:54:20.636728 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:54:20.636839 kubelet[2497]: E0124 00:54:20.636787 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:54:20.637037 kubelet[2497]: E0124 00:54:20.636993 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5df7ff5fdf-zht4d_calico-system(3c68c08e-36da-4f47-947e-e20fabb43d39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:20.639481 containerd[1458]: time="2026-01-24T00:54:20.639300046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:54:20.712153 containerd[1458]: time="2026-01-24T00:54:20.712085523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:20.717027 containerd[1458]: time="2026-01-24T00:54:20.714149417Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:54:20.717027 containerd[1458]: time="2026-01-24T00:54:20.714287094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:54:20.717185 kubelet[2497]: E0124 00:54:20.714571 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:54:20.717185 kubelet[2497]: E0124 00:54:20.714623 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:54:20.717185 kubelet[2497]: E0124 00:54:20.714710 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5df7ff5fdf-zht4d_calico-system(3c68c08e-36da-4f47-947e-e20fabb43d39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:20.717321 kubelet[2497]: E0124 00:54:20.714751 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df7ff5fdf-zht4d" podUID="3c68c08e-36da-4f47-947e-e20fabb43d39" Jan 24 00:54:20.718093 kernel: bpftool[4098]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:54:21.090801 systemd-networkd[1387]: vxlan.calico: Link UP Jan 24 00:54:21.090817 systemd-networkd[1387]: vxlan.calico: Gained carrier Jan 24 00:54:21.139352 systemd[1]: run-containerd-runc-k8s.io-85a192171243997d7b926d7ed26bd11c526a759e9f9f8a4014194d72bfe40867-runc.PzuUJV.mount: Deactivated successfully. 
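The ErrImagePull failures above escalate, in the next entry, to ImagePullBackOff: the kubelet stops pulling eagerly and retries on a capped exponential back-off. The sketch below shows that retry pattern; the 10s base and 5m cap are the commonly cited kubelet defaults and should be treated as an assumption here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const base = 10 * time.Second    // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap
	delay := base
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %v\n", attempt, delay)
		delay *= 2 // double after every failure...
		if delay > maxDelay {
			delay = maxDelay // ...but never wait longer than the cap
		}
	}
}
```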
Jan 24 00:54:21.524133 kubelet[2497]: E0124 00:54:21.523606 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df7ff5fdf-zht4d" podUID="3c68c08e-36da-4f47-947e-e20fabb43d39" Jan 24 00:54:21.745357 systemd-networkd[1387]: calia862e98531c: Gained IPv6LL Jan 24 00:54:22.318325 containerd[1458]: time="2026-01-24T00:54:22.318182670Z" level=info msg="StopPodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\"" Jan 24 00:54:22.322496 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.406 [INFO][4192] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.406 [INFO][4192] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" iface="eth0" netns="/var/run/netns/cni-abda0e7c-1d62-965a-7902-b4e679299647" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.407 [INFO][4192] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" iface="eth0" netns="/var/run/netns/cni-abda0e7c-1d62-965a-7902-b4e679299647" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.407 [INFO][4192] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" iface="eth0" netns="/var/run/netns/cni-abda0e7c-1d62-965a-7902-b4e679299647" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.407 [INFO][4192] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.407 [INFO][4192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.433 [INFO][4201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.433 [INFO][4201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.433 [INFO][4201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.445 [WARNING][4201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.445 [INFO][4201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.448 [INFO][4201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:22.455689 containerd[1458]: 2026-01-24 00:54:22.451 [INFO][4192] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:22.456664 containerd[1458]: time="2026-01-24T00:54:22.456272497Z" level=info msg="TearDown network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" successfully" Jan 24 00:54:22.456664 containerd[1458]: time="2026-01-24T00:54:22.456309767Z" level=info msg="StopPodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" returns successfully" Jan 24 00:54:22.460925 systemd[1]: run-netns-cni\x2dabda0e7c\x2d1d62\x2d965a\x2d7902\x2db4e679299647.mount: Deactivated successfully. 
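Note how the teardown sequences in this log are idempotent: a veth that is already gone ("Nothing to do") and an address that was never assigned ("Ignoring") are warnings, not failures, so a half-torn-down sandbox can always be cleaned up again. The same pattern in plain Go, using a hypothetical netns path in the style of the ones above:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// cleanupNetns removes a netns file, treating "already gone" as success,
// mirroring the "Workload's veth was already gone. Nothing to do." entries.
func cleanupNetns(path string) error {
	err := os.Remove(path)
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("netns already gone, nothing to do:", path)
		return nil
	}
	return err
}

func main() {
	// Hypothetical path; the real ones look like /var/run/netns/cni-<uuid>.
	if err := cleanupNetns("/var/run/netns/cni-example"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
```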
Jan 24 00:54:22.466670 containerd[1458]: time="2026-01-24T00:54:22.466586975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-l6xtt,Uid:96853c29-f6ae-4323-a4cd-7dc7ec0a8a17,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:54:22.636516 systemd-networkd[1387]: cali95e4badb67f: Link UP Jan 24 00:54:22.637150 systemd-networkd[1387]: cali95e4badb67f: Gained carrier Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.534 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0 calico-apiserver-5f99fd9549- calico-apiserver 96853c29-f6ae-4323-a4cd-7dc7ec0a8a17 934 0 2026-01-24 00:54:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f99fd9549 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f99fd9549-l6xtt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95e4badb67f [] [] }} ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.534 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.581 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" HandleID="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.581 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" HandleID="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000434420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f99fd9549-l6xtt", "timestamp":"2026-01-24 00:54:22.581157162 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.581 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.581 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.581 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.590 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.599 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.606 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.608 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.611 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.612 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.614 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17 Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.619 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.629 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.629 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" host="localhost" Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.629 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:22.670687 containerd[1458]: 2026-01-24 00:54:22.629 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" HandleID="k8s-pod-network.0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.671739 containerd[1458]: 2026-01-24 00:54:22.633 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f99fd9549-l6xtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95e4badb67f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:22.671739 containerd[1458]: 2026-01-24 00:54:22.633 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.671739 containerd[1458]: 2026-01-24 00:54:22.634 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95e4badb67f ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.671739 containerd[1458]: 2026-01-24 00:54:22.637 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.671739 containerd[1458]: 2026-01-24 00:54:22.638 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17", Pod:"calico-apiserver-5f99fd9549-l6xtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95e4badb67f", MAC:"6a:ea:86:a5:4d:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:22.671739 containerd[1458]: 2026-01-24 00:54:22.654 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-l6xtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:22.711332 containerd[1458]: time="2026-01-24T00:54:22.711108841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:22.713299 containerd[1458]: time="2026-01-24T00:54:22.713186212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:22.713524 containerd[1458]: time="2026-01-24T00:54:22.713341170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:22.714212 containerd[1458]: time="2026-01-24T00:54:22.713723432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:22.751229 systemd[1]: Started cri-containerd-0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17.scope - libcontainer container 0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17. 
Jan 24 00:54:22.773828 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:22.814401 containerd[1458]: time="2026-01-24T00:54:22.814315643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-l6xtt,Uid:96853c29-f6ae-4323-a4cd-7dc7ec0a8a17,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17\"" Jan 24 00:54:22.817031 containerd[1458]: time="2026-01-24T00:54:22.816709413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:54:22.887840 containerd[1458]: time="2026-01-24T00:54:22.887783344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:22.889837 containerd[1458]: time="2026-01-24T00:54:22.889725102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:54:22.889837 containerd[1458]: time="2026-01-24T00:54:22.889768843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:54:22.890089 kubelet[2497]: E0124 00:54:22.890040 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:22.890089 kubelet[2497]: E0124 00:54:22.890084 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:22.890423 kubelet[2497]: E0124 00:54:22.890148 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f99fd9549-l6xtt_calico-apiserver(96853c29-f6ae-4323-a4cd-7dc7ec0a8a17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:22.890423 kubelet[2497]: E0124 00:54:22.890178 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:23.318033 containerd[1458]: time="2026-01-24T00:54:23.317995231Z" level=info msg="StopPodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\"" Jan 24 00:54:23.318463 containerd[1458]: time="2026-01-24T00:54:23.318359941Z" 
level=info msg="StopPodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\"" Jan 24 00:54:23.319272 containerd[1458]: time="2026-01-24T00:54:23.318475876Z" level=info msg="StopPodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\"" Jan 24 00:54:23.319272 containerd[1458]: time="2026-01-24T00:54:23.318857778Z" level=info msg="StopPodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\"" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.415 [INFO][4324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.416 [INFO][4324] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" iface="eth0" netns="/var/run/netns/cni-c161f947-fb66-1177-15bd-4ba6e5937dd5" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.416 [INFO][4324] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" iface="eth0" netns="/var/run/netns/cni-c161f947-fb66-1177-15bd-4ba6e5937dd5" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.416 [INFO][4324] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" iface="eth0" netns="/var/run/netns/cni-c161f947-fb66-1177-15bd-4ba6e5937dd5" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.416 [INFO][4324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.417 [INFO][4324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.450 [INFO][4355] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.450 [INFO][4355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.450 [INFO][4355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.470 [WARNING][4355] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.470 [INFO][4355] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.473 [INFO][4355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:23.479429 containerd[1458]: 2026-01-24 00:54:23.476 [INFO][4324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:23.480788 containerd[1458]: time="2026-01-24T00:54:23.480565541Z" level=info msg="TearDown network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" successfully" Jan 24 00:54:23.480788 containerd[1458]: time="2026-01-24T00:54:23.480658342Z" level=info msg="StopPodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" returns successfully" Jan 24 00:54:23.489657 systemd[1]: run-netns-cni\x2dc161f947\x2dfb66\x2d1177\x2d15bd\x2d4ba6e5937dd5.mount: Deactivated successfully. Jan 24 00:54:23.496545 containerd[1458]: time="2026-01-24T00:54:23.496373923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8plzs,Uid:549ae7b9-2710-43b2-acf2-03007d90bb7e,Namespace:calico-system,Attempt:1,}" Jan 24 00:54:23.536846 kubelet[2497]: E0124 00:54:23.536563 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.444 [INFO][4330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.447 [INFO][4330] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" iface="eth0" netns="/var/run/netns/cni-9f6e9f4e-3c89-4be5-f5ea-326cd35232ef" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.447 [INFO][4330] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" iface="eth0" netns="/var/run/netns/cni-9f6e9f4e-3c89-4be5-f5ea-326cd35232ef" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.448 [INFO][4330] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" iface="eth0" netns="/var/run/netns/cni-9f6e9f4e-3c89-4be5-f5ea-326cd35232ef" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.448 [INFO][4330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.448 [INFO][4330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.501 [INFO][4372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.502 [INFO][4372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.502 [INFO][4372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.512 [WARNING][4372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.512 [INFO][4372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.516 [INFO][4372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:23.548588 containerd[1458]: 2026-01-24 00:54:23.538 [INFO][4330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:23.549299 containerd[1458]: time="2026-01-24T00:54:23.549095515Z" level=info msg="TearDown network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" successfully" Jan 24 00:54:23.549299 containerd[1458]: time="2026-01-24T00:54:23.549130360Z" level=info msg="StopPodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" returns successfully" Jan 24 00:54:23.557708 containerd[1458]: time="2026-01-24T00:54:23.557489224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598df9794d-p5z6d,Uid:80bbd4c4-8103-4c2d-b518-8fb02d9e29a2,Namespace:calico-system,Attempt:1,}" Jan 24 00:54:23.566873 systemd[1]: run-netns-cni\x2d9f6e9f4e\x2d3c89\x2d4be5\x2df5ea\x2d326cd35232ef.mount: Deactivated successfully. Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.439 [INFO][4325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.439 [INFO][4325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" iface="eth0" netns="/var/run/netns/cni-14828816-2c82-5984-b42a-814f3abc38b4" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.443 [INFO][4325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" iface="eth0" netns="/var/run/netns/cni-14828816-2c82-5984-b42a-814f3abc38b4" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.444 [INFO][4325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" iface="eth0" netns="/var/run/netns/cni-14828816-2c82-5984-b42a-814f3abc38b4" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.444 [INFO][4325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.444 [INFO][4325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.515 [INFO][4370] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.515 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.516 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.541 [WARNING][4370] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.541 [INFO][4370] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.545 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:23.598911 containerd[1458]: 2026-01-24 00:54:23.553 [INFO][4325] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:23.602065 containerd[1458]: time="2026-01-24T00:54:23.600086620Z" level=info msg="TearDown network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" successfully" Jan 24 00:54:23.602065 containerd[1458]: time="2026-01-24T00:54:23.600116085Z" level=info msg="StopPodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" returns successfully" Jan 24 00:54:23.607274 containerd[1458]: time="2026-01-24T00:54:23.607168340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-84j78,Uid:2c115e27-4279-4a97-b3f2-127b5b368e0a,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:54:23.608109 systemd[1]: run-netns-cni\x2d14828816\x2d2c82\x2d5984\x2db42a\x2d814f3abc38b4.mount: Deactivated successfully. Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.417 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.417 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" iface="eth0" netns="/var/run/netns/cni-8b35082f-50fd-549a-36e8-a6d854517427" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.418 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" iface="eth0" netns="/var/run/netns/cni-8b35082f-50fd-549a-36e8-a6d854517427" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.419 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" iface="eth0" netns="/var/run/netns/cni-8b35082f-50fd-549a-36e8-a6d854517427" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.420 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.420 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.519 [INFO][4362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.519 [INFO][4362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.545 [INFO][4362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.599 [WARNING][4362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.599 [INFO][4362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.603 [INFO][4362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:23.626669 containerd[1458]: 2026-01-24 00:54:23.618 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:23.628238 containerd[1458]: time="2026-01-24T00:54:23.628164809Z" level=info msg="TearDown network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" successfully" Jan 24 00:54:23.628304 containerd[1458]: time="2026-01-24T00:54:23.628290705Z" level=info msg="StopPodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" returns successfully" Jan 24 00:54:23.635015 kubelet[2497]: E0124 00:54:23.633725 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:23.635248 containerd[1458]: time="2026-01-24T00:54:23.635160585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mknhf,Uid:067d9f5c-5021-4e9c-bbcc-c8666caf180f,Namespace:kube-system,Attempt:1,}" Jan 24 00:54:23.757665 systemd-networkd[1387]: calif7d5065c442: Link UP Jan 24 00:54:23.758563 systemd-networkd[1387]: calif7d5065c442: Gained carrier Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.620 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--8plzs-eth0 goldmane-7c778bb748- calico-system 549ae7b9-2710-43b2-acf2-03007d90bb7e 951 0 2026-01-24 00:54:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-8plzs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif7d5065c442 [] [] }} ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.621 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.691 [INFO][4418] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" HandleID="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" 
Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.692 [INFO][4418] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" HandleID="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-8plzs", "timestamp":"2026-01-24 00:54:23.691445105 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.692 [INFO][4418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.692 [INFO][4418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.692 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.701 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.709 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.716 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.719 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.724 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.724 [INFO][4418] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.726 [INFO][4418] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018 Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.735 [INFO][4418] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.744 [INFO][4418] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.744 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" host="localhost" Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.745 [INFO][4418] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Jan 24 00:54:23.780074 containerd[1458]: 2026-01-24 00:54:23.745 [INFO][4418] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" HandleID="k8s-pod-network.615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.780678 containerd[1458]: 2026-01-24 00:54:23.749 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8plzs-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"549ae7b9-2710-43b2-acf2-03007d90bb7e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-8plzs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif7d5065c442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:23.780678 containerd[1458]: 2026-01-24 00:54:23.750 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.780678 containerd[1458]: 2026-01-24 00:54:23.750 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7d5065c442 ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.780678 containerd[1458]: 2026-01-24 00:54:23.758 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.780678 containerd[1458]: 2026-01-24 00:54:23.759 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8plzs-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"549ae7b9-2710-43b2-acf2-03007d90bb7e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018", Pod:"goldmane-7c778bb748-8plzs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif7d5065c442", MAC:"5a:8a:90:fa:0e:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:23.780678 containerd[1458]: 2026-01-24 00:54:23.775 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018" Namespace="calico-system" Pod="goldmane-7c778bb748-8plzs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:23.814398 containerd[1458]: time="2026-01-24T00:54:23.813304943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:23.814398 containerd[1458]: time="2026-01-24T00:54:23.813465482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:23.814398 containerd[1458]: time="2026-01-24T00:54:23.813481192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:23.814398 containerd[1458]: time="2026-01-24T00:54:23.813595434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:23.856290 systemd[1]: Started cri-containerd-615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018.scope - libcontainer container 615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018. 
Jan 24 00:54:23.861512 systemd-networkd[1387]: cali11126ef0e39: Link UP Jan 24 00:54:23.866073 systemd-networkd[1387]: cali11126ef0e39: Gained carrier Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.660 [INFO][4405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0 calico-kube-controllers-598df9794d- calico-system 80bbd4c4-8103-4c2d-b518-8fb02d9e29a2 953 0 2026-01-24 00:54:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:598df9794d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-598df9794d-p5z6d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali11126ef0e39 [] [] }} ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.660 [INFO][4405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.713 [INFO][4451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" HandleID="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.714 [INFO][4451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" HandleID="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139e10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-598df9794d-p5z6d", "timestamp":"2026-01-24 00:54:23.713381118 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.714 [INFO][4451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.745 [INFO][4451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.745 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.803 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.815 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.825 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.829 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.834 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.834 [INFO][4451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.839 [INFO][4451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3 Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.845 [INFO][4451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.852 [INFO][4451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.852 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" host="localhost" Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.852 [INFO][4451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:23.891411 containerd[1458]: 2026-01-24 00:54:23.852 [INFO][4451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" HandleID="k8s-pod-network.af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.894236 containerd[1458]: 2026-01-24 00:54:23.856 [INFO][4405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0", GenerateName:"calico-kube-controllers-598df9794d-", Namespace:"calico-system", SelfLink:"", UID:"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598df9794d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-598df9794d-p5z6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11126ef0e39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:23.894236 containerd[1458]: 2026-01-24 00:54:23.856 [INFO][4405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.894236 containerd[1458]: 2026-01-24 00:54:23.856 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11126ef0e39 ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.894236 containerd[1458]: 2026-01-24 00:54:23.865 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.894236 containerd[1458]: 2026-01-24 00:54:23.867 [INFO][4405] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0", GenerateName:"calico-kube-controllers-598df9794d-", Namespace:"calico-system", SelfLink:"", UID:"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598df9794d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3", Pod:"calico-kube-controllers-598df9794d-p5z6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11126ef0e39", MAC:"1a:f2:bb:bc:9d:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:23.894236 containerd[1458]: 2026-01-24 00:54:23.884 [INFO][4405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3" Namespace="calico-system" Pod="calico-kube-controllers-598df9794d-p5z6d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:23.904654 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:23.943104 containerd[1458]: time="2026-01-24T00:54:23.942748058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:23.943104 containerd[1458]: time="2026-01-24T00:54:23.943050752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:23.943104 containerd[1458]: time="2026-01-24T00:54:23.943074707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:23.946221 containerd[1458]: time="2026-01-24T00:54:23.946035785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:23.979412 systemd[1]: Started cri-containerd-af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3.scope - libcontainer container af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3. 
Jan 24 00:54:23.990207 containerd[1458]: time="2026-01-24T00:54:23.990171295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8plzs,Uid:549ae7b9-2710-43b2-acf2-03007d90bb7e,Namespace:calico-system,Attempt:1,} returns sandbox id \"615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018\"" Jan 24 00:54:23.993791 systemd-networkd[1387]: califc84ac12955: Link UP Jan 24 00:54:23.996076 systemd-networkd[1387]: califc84ac12955: Gained carrier Jan 24 00:54:24.000178 containerd[1458]: time="2026-01-24T00:54:24.000110204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:54:24.040120 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.725 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--mknhf-eth0 coredns-66bc5c9577- kube-system 067d9f5c-5021-4e9c-bbcc-c8666caf180f 950 0 2026-01-24 00:53:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-mknhf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc84ac12955 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.725 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.773 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" HandleID="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.774 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" HandleID="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c71f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-mknhf", "timestamp":"2026-01-24 00:54:23.773613878 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.774 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.853 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.853 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.909 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.922 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.932 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.938 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.946 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.946 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.950 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48 Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.955 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.969 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.969 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" host="localhost" Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.969 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:24.047578 containerd[1458]: 2026-01-24 00:54:23.969 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" HandleID="k8s-pod-network.9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.048463 containerd[1458]: 2026-01-24 00:54:23.974 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mknhf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"067d9f5c-5021-4e9c-bbcc-c8666caf180f", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-mknhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84ac12955", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:24.048463 containerd[1458]: 2026-01-24 00:54:23.986 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.048463 containerd[1458]: 2026-01-24 00:54:23.986 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc84ac12955 ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.048463 containerd[1458]: 2026-01-24 00:54:24.001 
[INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.048463 containerd[1458]: 2026-01-24 00:54:24.008 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mknhf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"067d9f5c-5021-4e9c-bbcc-c8666caf180f", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48", Pod:"coredns-66bc5c9577-mknhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84ac12955", MAC:"1a:1f:e0:1b:36:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:24.048463 containerd[1458]: 2026-01-24 00:54:24.034 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48" Namespace="kube-system" Pod="coredns-66bc5c9577-mknhf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:24.094286 containerd[1458]: time="2026-01-24T00:54:24.094058346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:24.099751 containerd[1458]: time="2026-01-24T00:54:24.099457315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:24.099861 containerd[1458]: time="2026-01-24T00:54:24.099766202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:54:24.100113 containerd[1458]: time="2026-01-24T00:54:24.099849056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:54:24.101185 kubelet[2497]: E0124 00:54:24.100868 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:54:24.101185 kubelet[2497]: E0124 00:54:24.101029 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:54:24.104086 kubelet[2497]: E0124 00:54:24.101198 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8plzs_calico-system(549ae7b9-2710-43b2-acf2-03007d90bb7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:24.104086 kubelet[2497]: E0124 00:54:24.101244 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:24.104152 containerd[1458]: time="2026-01-24T00:54:24.101054731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:24.104152 containerd[1458]: time="2026-01-24T00:54:24.101351645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:24.104152 containerd[1458]: time="2026-01-24T00:54:24.102486089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:24.111207 systemd-networkd[1387]: calie014acedbd4: Link UP Jan 24 00:54:24.113435 systemd-networkd[1387]: calie014acedbd4: Gained carrier Jan 24 00:54:24.116682 systemd-networkd[1387]: cali95e4badb67f: Gained IPv6LL Jan 24 00:54:24.135780 containerd[1458]: time="2026-01-24T00:54:24.135279639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598df9794d-p5z6d,Uid:80bbd4c4-8103-4c2d-b518-8fb02d9e29a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3\"" Jan 24 00:54:24.142433 containerd[1458]: time="2026-01-24T00:54:24.142248855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:54:24.160462 systemd[1]: Started cri-containerd-9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48.scope - libcontainer container 9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48. Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.710 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0 calico-apiserver-5f99fd9549- calico-apiserver 2c115e27-4279-4a97-b3f2-127b5b368e0a 952 0 2026-01-24 00:54:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f99fd9549 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f99fd9549-84j78 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie014acedbd4 [] [] }} ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.710 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.778 [INFO][4465] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" HandleID="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.778 [INFO][4465] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" HandleID="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f99fd9549-84j78", "timestamp":"2026-01-24 00:54:23.778054261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.778 [INFO][4465] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.969 [INFO][4465] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:23.970 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.004 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.023 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.036 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.042 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.045 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.045 [INFO][4465] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.050 [INFO][4465] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.067 [INFO][4465] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.080 [INFO][4465] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.080 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" host="localhost" Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.081 [INFO][4465] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:24.169523 containerd[1458]: 2026-01-24 00:54:24.081 [INFO][4465] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" HandleID="k8s-pod-network.87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.170374 containerd[1458]: 2026-01-24 00:54:24.096 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c115e27-4279-4a97-b3f2-127b5b368e0a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f99fd9549-84j78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie014acedbd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:24.170374 containerd[1458]: 2026-01-24 00:54:24.096 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.170374 containerd[1458]: 2026-01-24 00:54:24.096 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie014acedbd4 ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.170374 containerd[1458]: 2026-01-24 00:54:24.126 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.170374 containerd[1458]: 2026-01-24 00:54:24.128 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c115e27-4279-4a97-b3f2-127b5b368e0a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b", Pod:"calico-apiserver-5f99fd9549-84j78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie014acedbd4", MAC:"72:3c:bd:28:1c:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:24.170374 containerd[1458]: 2026-01-24 00:54:24.163 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b" Namespace="calico-apiserver" Pod="calico-apiserver-5f99fd9549-84j78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:24.186516 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:24.214821 containerd[1458]: time="2026-01-24T00:54:24.214603969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:24.214821 containerd[1458]: time="2026-01-24T00:54:24.214698435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:24.214821 containerd[1458]: time="2026-01-24T00:54:24.214731437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:24.215415 containerd[1458]: time="2026-01-24T00:54:24.214838286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:24.224798 containerd[1458]: time="2026-01-24T00:54:24.224725468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mknhf,Uid:067d9f5c-5021-4e9c-bbcc-c8666caf180f,Namespace:kube-system,Attempt:1,} returns sandbox id \"9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48\"" Jan 24 00:54:24.225370 containerd[1458]: time="2026-01-24T00:54:24.225301883Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:24.227264 containerd[1458]: time="2026-01-24T00:54:24.227124430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:54:24.227398 kubelet[2497]: E0124 00:54:24.227238 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:24.227544 containerd[1458]: time="2026-01-24T00:54:24.227260905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:54:24.227780 kubelet[2497]: E0124 00:54:24.227445 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:54:24.227780 kubelet[2497]: E0124 00:54:24.227502 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:54:24.227780 kubelet[2497]: E0124 00:54:24.227666 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598df9794d-p5z6d_calico-system(80bbd4c4-8103-4c2d-b518-8fb02d9e29a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:24.228465 kubelet[2497]: E0124 00:54:24.227766 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:24.233832 containerd[1458]: time="2026-01-24T00:54:24.233729547Z" level=info msg="CreateContainer within sandbox 
\"9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:54:24.245231 systemd[1]: Started cri-containerd-87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b.scope - libcontainer container 87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b. Jan 24 00:54:24.271495 containerd[1458]: time="2026-01-24T00:54:24.271324895Z" level=info msg="CreateContainer within sandbox \"9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9b3a6b74f8b6c65d3dd63d02426cfff3ae3aa431f538b4b4210101c53759a7f\"" Jan 24 00:54:24.273157 containerd[1458]: time="2026-01-24T00:54:24.273061934Z" level=info msg="StartContainer for \"d9b3a6b74f8b6c65d3dd63d02426cfff3ae3aa431f538b4b4210101c53759a7f\"" Jan 24 00:54:24.281127 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:24.324174 systemd[1]: Started cri-containerd-d9b3a6b74f8b6c65d3dd63d02426cfff3ae3aa431f538b4b4210101c53759a7f.scope - libcontainer container d9b3a6b74f8b6c65d3dd63d02426cfff3ae3aa431f538b4b4210101c53759a7f. Jan 24 00:54:24.327317 containerd[1458]: time="2026-01-24T00:54:24.327118296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f99fd9549-84j78,Uid:2c115e27-4279-4a97-b3f2-127b5b368e0a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b\"" Jan 24 00:54:24.331392 containerd[1458]: time="2026-01-24T00:54:24.331341837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:54:24.376057 containerd[1458]: time="2026-01-24T00:54:24.375766039Z" level=info msg="StartContainer for \"d9b3a6b74f8b6c65d3dd63d02426cfff3ae3aa431f538b4b4210101c53759a7f\" returns successfully" Jan 24 00:54:24.399640 containerd[1458]: time="2026-01-24T00:54:24.399468114Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:24.401407 containerd[1458]: time="2026-01-24T00:54:24.401276967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:54:24.401407 containerd[1458]: time="2026-01-24T00:54:24.401330074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:54:24.401656 kubelet[2497]: E0124 00:54:24.401573 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:24.401656 kubelet[2497]: E0124 00:54:24.401631 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:24.402050 kubelet[2497]: E0124 
00:54:24.401756 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f99fd9549-84j78_calico-apiserver(2c115e27-4279-4a97-b3f2-127b5b368e0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:24.402050 kubelet[2497]: E0124 00:54:24.401801 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:24.494136 systemd[1]: run-netns-cni\x2d8b35082f\x2d50fd\x2d549a\x2d36e8\x2da6d854517427.mount: Deactivated successfully. Jan 24 00:54:24.538519 kubelet[2497]: E0124 00:54:24.538460 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:24.544519 kubelet[2497]: E0124 00:54:24.544465 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:24.549385 kubelet[2497]: E0124 00:54:24.549274 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:24.554358 kubelet[2497]: E0124 00:54:24.554188 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:24.554358 kubelet[2497]: E0124 00:54:24.554276 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:24.583875 kubelet[2497]: I0124 00:54:24.583518 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mknhf" podStartSLOduration=34.583503541 podStartE2EDuration="34.583503541s" podCreationTimestamp="2026-01-24 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:54:24.557668187 +0000 UTC m=+40.371954334" watchObservedRunningTime="2026-01-24 00:54:24.583503541 +0000 UTC m=+40.397789677" Jan 24 00:54:25.009366 systemd-networkd[1387]: cali11126ef0e39: Gained IPv6LL Jan 24 00:54:25.201392 systemd-networkd[1387]: calie014acedbd4: Gained IPv6LL Jan 24 00:54:25.393228 systemd-networkd[1387]: calif7d5065c442: Gained IPv6LL Jan 24 00:54:25.555086 kubelet[2497]: E0124 00:54:25.554649 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:25.555668 kubelet[2497]: E0124 00:54:25.555569 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:25.555668 kubelet[2497]: E0124 00:54:25.555602 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:25.555668 kubelet[2497]: E0124 00:54:25.555617 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:25.777265 systemd-networkd[1387]: califc84ac12955: Gained IPv6LL Jan 24 00:54:26.321004 containerd[1458]: time="2026-01-24T00:54:26.320812198Z" 
level=info msg="StopPodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\"" Jan 24 00:54:26.321445 containerd[1458]: time="2026-01-24T00:54:26.321402598Z" level=info msg="StopPodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\"" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.403 [INFO][4755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.405 [INFO][4755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" iface="eth0" netns="/var/run/netns/cni-27797c92-ac7b-e3f3-8a47-983fe6095c2d" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.407 [INFO][4755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" iface="eth0" netns="/var/run/netns/cni-27797c92-ac7b-e3f3-8a47-983fe6095c2d" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.407 [INFO][4755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" iface="eth0" netns="/var/run/netns/cni-27797c92-ac7b-e3f3-8a47-983fe6095c2d" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.407 [INFO][4755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.407 [INFO][4755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.437 [INFO][4767] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.437 [INFO][4767] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.437 [INFO][4767] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.446 [WARNING][4767] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.446 [INFO][4767] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.448 [INFO][4767] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:26.455491 containerd[1458]: 2026-01-24 00:54:26.451 [INFO][4755] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:26.457250 containerd[1458]: time="2026-01-24T00:54:26.456807469Z" level=info msg="TearDown network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" successfully" Jan 24 00:54:26.457250 containerd[1458]: time="2026-01-24T00:54:26.456841302Z" level=info msg="StopPodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" returns successfully" Jan 24 00:54:26.461727 systemd[1]: run-netns-cni\x2d27797c92\x2dac7b\x2de3f3\x2d8a47\x2d983fe6095c2d.mount: Deactivated successfully. Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.408 [INFO][4750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.409 [INFO][4750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" iface="eth0" netns="/var/run/netns/cni-17efd30d-ee21-b40b-7864-38aa09c68748" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.410 [INFO][4750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" iface="eth0" netns="/var/run/netns/cni-17efd30d-ee21-b40b-7864-38aa09c68748" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.410 [INFO][4750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" iface="eth0" netns="/var/run/netns/cni-17efd30d-ee21-b40b-7864-38aa09c68748" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.410 [INFO][4750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.410 [INFO][4750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.437 [INFO][4769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.438 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.448 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.457 [WARNING][4769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.458 [INFO][4769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.466 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:26.473306 containerd[1458]: 2026-01-24 00:54:26.470 [INFO][4750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:26.473867 containerd[1458]: time="2026-01-24T00:54:26.473640645Z" level=info msg="TearDown network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" successfully" Jan 24 00:54:26.473867 containerd[1458]: time="2026-01-24T00:54:26.473672275Z" level=info msg="StopPodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" returns successfully" Jan 24 00:54:26.476759 systemd[1]: run-netns-cni\x2d17efd30d\x2dee21\x2db40b\x2d7864\x2d38aa09c68748.mount: Deactivated successfully. Jan 24 00:54:26.477702 containerd[1458]: time="2026-01-24T00:54:26.477056134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cmnzx,Uid:fa4fe547-535b-479c-9c29-60c4ee40c975,Namespace:calico-system,Attempt:1,}" Jan 24 00:54:26.479577 kubelet[2497]: E0124 00:54:26.479525 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:26.480440 containerd[1458]: time="2026-01-24T00:54:26.480283850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x44ds,Uid:aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f,Namespace:kube-system,Attempt:1,}" Jan 24 00:54:26.561734 kubelet[2497]: E0124 00:54:26.561159 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:26.674028 systemd-networkd[1387]: califc0c874807d: Link UP Jan 24 00:54:26.676215 systemd-networkd[1387]: califc0c874807d: Gained carrier Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.554 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cmnzx-eth0 csi-node-driver- calico-system fa4fe547-535b-479c-9c29-60c4ee40c975 1028 0 2026-01-24 00:54:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cmnzx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califc0c874807d [] [] }} ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-" 
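
The recurring `dns.go:154 "Nameserver limits exceeded"` records come from kubelet capping the resolv.conf it hands to pods at three nameservers (the glibc resolver limit); the applied line in these logs keeps `1.1.1.1 1.0.0.1 8.8.8.8` and drops anything beyond that. A minimal Python sketch of that trimming behavior follows; the fourth nameserver entry is hypothetical, while the first three match the applied line in the log:

```python
# Sketch of kubelet's nameserver capping: dns.go logs "Nameserver limits
# exceeded" when the host resolv.conf lists more than three servers.
# The fourth entry below is a hypothetical extra; the first three match
# the "applied nameserver line" in the records above.
MAX_DNS_NAMESERVERS = 3  # glibc resolver limit that kubelet enforces

resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""

servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
applied = servers[:MAX_DNS_NAMESERVERS]
if len(servers) > MAX_DNS_NAMESERVERS:
    print(f"Nameserver limits exceeded, applied line: {' '.join(applied)}")
```

The warning is harmless but repeats on every pod sync; the usual remedy is trimming the host's resolv.conf, or pointing kubelet's `--resolv-conf` flag at a file with at most three entries.
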
Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.554 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.602 [INFO][4818] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" HandleID="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.602 [INFO][4818] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" HandleID="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cmnzx", "timestamp":"2026-01-24 00:54:26.60224184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.602 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.602 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.602 [INFO][4818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.612 [INFO][4818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.627 [INFO][4818] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.635 [INFO][4818] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.638 [INFO][4818] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.641 [INFO][4818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.641 [INFO][4818] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.643 [INFO][4818] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456 Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.650 [INFO][4818] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.664 [INFO][4818] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.665 [INFO][4818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" host="localhost" Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.665 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
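
The IPAM walk above (trying the affinity for 192.168.88.128/26, loading the block, claiming 192.168.88.135 under the host-wide lock) is Calico carving pod addresses out of a /26 block affine to this node. A short sketch using Python's `ipaddress` module makes the block arithmetic concrete; the .134–.136 addresses assigned throughout this log all fall in the same 64-address block:

```python
import ipaddress

# The block this node holds an affinity for, per the IPAM records above.
block = ipaddress.ip_network("192.168.88.128/26")
print(block.num_addresses)  # 64 addresses per /26 block

# Pod IPs assigned in this log all come from that block.
for ip in ("192.168.88.134", "192.168.88.135", "192.168.88.136"):
    assert ipaddress.ip_address(ip) in block

print(f"{block} spans {block[0]}-{block[-1]}")
```
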
Jan 24 00:54:26.698351 containerd[1458]: 2026-01-24 00:54:26.665 [INFO][4818] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" HandleID="k8s-pod-network.af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.699449 containerd[1458]: 2026-01-24 00:54:26.669 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cmnzx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa4fe547-535b-479c-9c29-60c4ee40c975", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cmnzx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califc0c874807d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:26.699449 containerd[1458]: 2026-01-24 00:54:26.669 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.699449 containerd[1458]: 2026-01-24 00:54:26.669 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc0c874807d ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.699449 containerd[1458]: 2026-01-24 00:54:26.677 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.699449 containerd[1458]: 2026-01-24 00:54:26.677 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cmnzx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa4fe547-535b-479c-9c29-60c4ee40c975", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456", Pod:"csi-node-driver-cmnzx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califc0c874807d", MAC:"7e:7b:1b:f6:89:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:26.699449 containerd[1458]: 2026-01-24 00:54:26.695 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456" Namespace="calico-system" Pod="csi-node-driver-cmnzx" WorkloadEndpoint="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:26.729417 containerd[1458]: time="2026-01-24T00:54:26.729198322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:26.729417 containerd[1458]: time="2026-01-24T00:54:26.729289583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:26.729417 containerd[1458]: time="2026-01-24T00:54:26.729305132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:26.734050 containerd[1458]: time="2026-01-24T00:54:26.733293235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:26.769309 systemd[1]: Started cri-containerd-af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456.scope - libcontainer container af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456. 
Jan 24 00:54:26.784024 systemd-networkd[1387]: cali3caae14ad9e: Link UP Jan 24 00:54:26.785177 systemd-networkd[1387]: cali3caae14ad9e: Gained carrier Jan 24 00:54:26.795869 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.580 [INFO][4800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--x44ds-eth0 coredns-66bc5c9577- kube-system aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f 1029 0 2026-01-24 00:53:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-x44ds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3caae14ad9e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.580 [INFO][4800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.620 [INFO][4826] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" HandleID="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.620 [INFO][4826] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" HandleID="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001398f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-x44ds", "timestamp":"2026-01-24 00:54:26.620399395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.620 [INFO][4826] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.665 [INFO][4826] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.665 [INFO][4826] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.713 [INFO][4826] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.724 [INFO][4826] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.737 [INFO][4826] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.740 [INFO][4826] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.744 [INFO][4826] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.744 [INFO][4826] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.747 [INFO][4826] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415 Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.755 [INFO][4826] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.769 [INFO][4826] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.769 [INFO][4826] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" host="localhost" Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.770 [INFO][4826] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:54:26.808578 containerd[1458]: 2026-01-24 00:54:26.770 [INFO][4826] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" HandleID="k8s-pod-network.ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.809685 containerd[1458]: 2026-01-24 00:54:26.775 [INFO][4800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x44ds-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-x44ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3caae14ad9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:26.809685 containerd[1458]: 2026-01-24 00:54:26.776 [INFO][4800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.809685 containerd[1458]: 2026-01-24 00:54:26.776 [INFO][4800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3caae14ad9e ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.809685 containerd[1458]: 2026-01-24 00:54:26.785 
[INFO][4800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.809685 containerd[1458]: 2026-01-24 00:54:26.786 [INFO][4800] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x44ds-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415", Pod:"coredns-66bc5c9577-x44ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3caae14ad9e", MAC:"02:67:ca:9b:4b:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:26.809685 containerd[1458]: 2026-01-24 00:54:26.802 [INFO][4800] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415" Namespace="kube-system" Pod="coredns-66bc5c9577-x44ds" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:26.826625 containerd[1458]: time="2026-01-24T00:54:26.826426997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cmnzx,Uid:fa4fe547-535b-479c-9c29-60c4ee40c975,Namespace:calico-system,Attempt:1,} returns sandbox id \"af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456\"" Jan 24 00:54:26.830395 containerd[1458]: time="2026-01-24T00:54:26.830357872Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:54:26.849291 containerd[1458]: time="2026-01-24T00:54:26.849157171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:26.849291 containerd[1458]: time="2026-01-24T00:54:26.849249143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:26.849468 containerd[1458]: time="2026-01-24T00:54:26.849274801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:26.849604 containerd[1458]: time="2026-01-24T00:54:26.849524145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:26.893188 systemd[1]: Started cri-containerd-ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415.scope - libcontainer container ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415. Jan 24 00:54:26.901704 containerd[1458]: time="2026-01-24T00:54:26.901661938Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:26.903548 containerd[1458]: time="2026-01-24T00:54:26.903447335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:54:26.903850 containerd[1458]: time="2026-01-24T00:54:26.903648891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:54:26.904518 kubelet[2497]: E0124 00:54:26.904471 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:54:26.904642 kubelet[2497]: E0124 00:54:26.904620 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:54:26.906146 kubelet[2497]: E0124 00:54:26.904807 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:26.908728 containerd[1458]: time="2026-01-24T00:54:26.908314795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:54:26.915548 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:26.955601 containerd[1458]: time="2026-01-24T00:54:26.955356087Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-x44ds,Uid:aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415\"" Jan 24 00:54:26.957146 kubelet[2497]: E0124 00:54:26.957090 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:26.968726 containerd[1458]: time="2026-01-24T00:54:26.968648103Z" level=info msg="CreateContainer within sandbox \"ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:54:26.972230 containerd[1458]: time="2026-01-24T00:54:26.972167551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:26.975619 containerd[1458]: time="2026-01-24T00:54:26.975471041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:54:26.975619 containerd[1458]: time="2026-01-24T00:54:26.975590764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:54:26.976051 kubelet[2497]: E0124 00:54:26.975855 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:54:26.976051 kubelet[2497]: E0124 00:54:26.976031 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:54:26.976378 kubelet[2497]: E0124 00:54:26.976296 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:26.978544 kubelet[2497]: E0124 00:54:26.978484 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:26.991637 containerd[1458]: time="2026-01-24T00:54:26.991562824Z" level=info msg="CreateContainer within sandbox \"ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f0fc654a213805dd3fc5149443176e462cc160d213eb3e44db38fa25d197161\"" Jan 24 00:54:26.992488 containerd[1458]: time="2026-01-24T00:54:26.992417144Z" level=info msg="StartContainer for \"9f0fc654a213805dd3fc5149443176e462cc160d213eb3e44db38fa25d197161\"" Jan 24 00:54:27.041197 systemd[1]: Started cri-containerd-9f0fc654a213805dd3fc5149443176e462cc160d213eb3e44db38fa25d197161.scope - libcontainer container 9f0fc654a213805dd3fc5149443176e462cc160d213eb3e44db38fa25d197161. Jan 24 00:54:27.086168 containerd[1458]: time="2026-01-24T00:54:27.086131391Z" level=info msg="StartContainer for \"9f0fc654a213805dd3fc5149443176e462cc160d213eb3e44db38fa25d197161\" returns successfully" Jan 24 00:54:27.566233 kubelet[2497]: E0124 00:54:27.565801 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:27.568655 kubelet[2497]: E0124 00:54:27.568608 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:27.570132 kubelet[2497]: E0124 00:54:27.570085 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:27.582340 kubelet[2497]: I0124 00:54:27.582237 2497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x44ds" podStartSLOduration=37.582220137 podStartE2EDuration="37.582220137s" podCreationTimestamp="2026-01-24 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:54:27.581082764 +0000 UTC m=+43.395368919" watchObservedRunningTime="2026-01-24 00:54:27.582220137 +0000 UTC m=+43.396506272" Jan 24 00:54:27.825465 systemd-networkd[1387]: califc0c874807d: Gained IPv6LL Jan 24 00:54:28.574213 kubelet[2497]: E0124 00:54:28.574095 2497 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:28.579114 kubelet[2497]: E0124 00:54:28.578783 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:28.593279 systemd-networkd[1387]: cali3caae14ad9e: Gained IPv6LL Jan 24 00:54:29.577722 kubelet[2497]: E0124 00:54:29.577629 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:34.319827 containerd[1458]: time="2026-01-24T00:54:34.319778609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:54:34.393226 containerd[1458]: time="2026-01-24T00:54:34.393066772Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:34.394760 containerd[1458]: time="2026-01-24T00:54:34.394619347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:54:34.394760 containerd[1458]: time="2026-01-24T00:54:34.394702041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:54:34.395007 kubelet[2497]: E0124 00:54:34.394888 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:54:34.395402 kubelet[2497]: E0124 00:54:34.395042 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:54:34.395402 kubelet[2497]: E0124 00:54:34.395154 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5df7ff5fdf-zht4d_calico-system(3c68c08e-36da-4f47-947e-e20fabb43d39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:34.396456 containerd[1458]: time="2026-01-24T00:54:34.396176066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:54:34.464654 containerd[1458]: time="2026-01-24T00:54:34.464501327Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:34.467102 containerd[1458]: time="2026-01-24T00:54:34.466857471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:54:34.467102 containerd[1458]: time="2026-01-24T00:54:34.466991364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:54:34.467298 kubelet[2497]: E0124 00:54:34.467117 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:54:34.467298 kubelet[2497]: E0124 00:54:34.467156 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:54:34.467298 kubelet[2497]: E0124 00:54:34.467222 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5df7ff5fdf-zht4d_calico-system(3c68c08e-36da-4f47-947e-e20fabb43d39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:34.467417 kubelet[2497]: E0124 00:54:34.467261 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df7ff5fdf-zht4d" podUID="3c68c08e-36da-4f47-947e-e20fabb43d39" Jan 24 00:54:36.319028 containerd[1458]: time="2026-01-24T00:54:36.318855701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" 
Jan 24 00:54:36.389994 containerd[1458]: time="2026-01-24T00:54:36.389779735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:36.392006 containerd[1458]: time="2026-01-24T00:54:36.391864298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:54:36.392156 containerd[1458]: time="2026-01-24T00:54:36.391924797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:54:36.392364 kubelet[2497]: E0124 00:54:36.392297 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:54:36.392683 kubelet[2497]: E0124 00:54:36.392362 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:54:36.392683 kubelet[2497]: E0124 00:54:36.392451 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598df9794d-p5z6d_calico-system(80bbd4c4-8103-4c2d-b518-8fb02d9e29a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:36.392683 kubelet[2497]: E0124 00:54:36.392485 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:40.321336 containerd[1458]: time="2026-01-24T00:54:40.321050683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:54:40.395661 containerd[1458]: time="2026-01-24T00:54:40.395562997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:40.397544 containerd[1458]: time="2026-01-24T00:54:40.397472341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:54:40.397620 
containerd[1458]: time="2026-01-24T00:54:40.397567256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:54:40.397925 kubelet[2497]: E0124 00:54:40.397810 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:54:40.397925 kubelet[2497]: E0124 00:54:40.397894 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:54:40.398405 kubelet[2497]: E0124 00:54:40.398178 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8plzs_calico-system(549ae7b9-2710-43b2-acf2-03007d90bb7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:40.398405 kubelet[2497]: E0124 00:54:40.398222 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:40.399078 containerd[1458]: time="2026-01-24T00:54:40.398892676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:54:40.471369 containerd[1458]: time="2026-01-24T00:54:40.471275504Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:40.472906 containerd[1458]: time="2026-01-24T00:54:40.472747926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:54:40.472906 containerd[1458]: time="2026-01-24T00:54:40.472791368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:54:40.473265 kubelet[2497]: E0124 00:54:40.473157 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:54:40.473265 kubelet[2497]: E0124 00:54:40.473204 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:54:40.473422 kubelet[2497]: E0124 00:54:40.473402 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:40.473684 containerd[1458]: time="2026-01-24T00:54:40.473643394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:54:40.535819 containerd[1458]: time="2026-01-24T00:54:40.535736037Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:40.537627 containerd[1458]: time="2026-01-24T00:54:40.537509296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:54:40.537734 containerd[1458]: time="2026-01-24T00:54:40.537563963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:54:40.538054 kubelet[2497]: E0124 00:54:40.537888 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:40.538161 kubelet[2497]: E0124 00:54:40.538063 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:40.538421 kubelet[2497]: E0124 00:54:40.538338 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f99fd9549-84j78_calico-apiserver(2c115e27-4279-4a97-b3f2-127b5b368e0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:40.538495 kubelet[2497]: E0124 00:54:40.538410 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:40.538599 containerd[1458]: time="2026-01-24T00:54:40.538519526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:54:40.609114 
containerd[1458]: time="2026-01-24T00:54:40.608821764Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:40.610719 containerd[1458]: time="2026-01-24T00:54:40.610561821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:54:40.610719 containerd[1458]: time="2026-01-24T00:54:40.610653603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:54:40.610877 kubelet[2497]: E0124 00:54:40.610799 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:40.610877 kubelet[2497]: E0124 00:54:40.610849 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:54:40.611253 kubelet[2497]: E0124 00:54:40.611213 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f99fd9549-l6xtt_calico-apiserver(96853c29-f6ae-4323-a4cd-7dc7ec0a8a17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:40.611353 kubelet[2497]: E0124 00:54:40.611271 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:40.611471 containerd[1458]: time="2026-01-24T00:54:40.611352176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:54:40.669024 containerd[1458]: time="2026-01-24T00:54:40.668869877Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:54:40.670435 containerd[1458]: time="2026-01-24T00:54:40.670346366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:54:40.670647 containerd[1458]: time="2026-01-24T00:54:40.670437490Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:54:40.670706 kubelet[2497]: E0124 00:54:40.670632 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:54:40.670706 kubelet[2497]: E0124 00:54:40.670673 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:54:40.670911 kubelet[2497]: E0124 00:54:40.670756 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:54:40.670911 kubelet[2497]: E0124 00:54:40.670816 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:42.738205 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:57246.service - OpenSSH per-connection server daemon (10.0.0.1:57246). Jan 24 00:54:42.803501 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 57246 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:42.807628 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:42.813870 systemd-logind[1439]: New session 8 of user core. Jan 24 00:54:42.824205 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:54:42.973731 sshd[5001]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:42.978811 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:57246.service: Deactivated successfully. Jan 24 00:54:42.981393 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:54:42.984599 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:54:42.986422 systemd-logind[1439]: Removed session 8. 
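Every Calico image pull in this sequence fails the same way: containerd asks ghcr.io to resolve the v3.30.4 tag, the registry answers 404 (logged as "trying next host - response was http.StatusNotFound"), and with no other host configured the pull is abandoned, leaving kubelet in ImagePullBackOff. The same resolution step can be re-checked outside containerd with the OCI distribution API. This is a hedged sketch, assuming ghcr.io's documented anonymous-token flow for public pulls; the repository and tag are copied from the log, everything else is generic registry-client behaviour.

# Re-check the manifest resolution that containerd performs, standard library only.
import json
import urllib.error
import urllib.request

REGISTRY = "https://ghcr.io"
REPO = "flatcar/calico/node-driver-registrar"  # repository and tag taken from the log
TAG = "v3.30.4"

# ghcr.io issues anonymous pull tokens from its /token endpoint.
token_url = f"{REGISTRY}/token?scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# HEAD the manifest exactly as a registry client would; a 404 here is what
# containerd surfaces as 'failed to resolve reference ... not found'.
req = urllib.request.Request(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    method="HEAD",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": ", ".join([
            "application/vnd.oci.image.index.v1+json",
            "application/vnd.docker.distribution.manifest.list.v2+json",
            "application/vnd.docker.distribution.manifest.v2+json",
        ]),
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print("tag resolves:", resp.status)
except urllib.error.HTTPError as err:
    print("resolution failed:", err.code)  # 404 matches the NotFound in the log

If the tag genuinely does not exist in the registry, no amount of kubelet back-off will recover; the fix is on the publishing side (push the image or correct the tag referenced by the Calico manifests).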
Jan 24 00:54:44.302889 containerd[1458]: time="2026-01-24T00:54:44.302800472Z" level=info msg="StopPodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\"" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.357 [WARNING][5026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mknhf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"067d9f5c-5021-4e9c-bbcc-c8666caf180f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48", Pod:"coredns-66bc5c9577-mknhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84ac12955", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.358 [INFO][5026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.358 [INFO][5026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" iface="eth0" netns="" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.358 [INFO][5026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.358 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.393 [INFO][5037] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.393 [INFO][5037] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.393 [INFO][5037] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.402 [WARNING][5037] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.402 [INFO][5037] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.405 [INFO][5037] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:44.411744 containerd[1458]: 2026-01-24 00:54:44.408 [INFO][5026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.412513 containerd[1458]: time="2026-01-24T00:54:44.411794934Z" level=info msg="TearDown network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" successfully" Jan 24 00:54:44.412513 containerd[1458]: time="2026-01-24T00:54:44.411821204Z" level=info msg="StopPodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" returns successfully" Jan 24 00:54:44.412840 containerd[1458]: time="2026-01-24T00:54:44.412798387Z" level=info msg="RemovePodSandbox for \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\"" Jan 24 00:54:44.415141 containerd[1458]: time="2026-01-24T00:54:44.415040962Z" level=info msg="Forcibly stopping sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\"" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.466 [WARNING][5056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mknhf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"067d9f5c-5021-4e9c-bbcc-c8666caf180f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9219cf98618308b63207ccd5b143897c9108f83543422c8b79425e71a22c5c48", Pod:"coredns-66bc5c9577-mknhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc84ac12955", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.466 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.466 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" iface="eth0" netns="" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.466 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.466 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.493 [INFO][5065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.493 [INFO][5065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.493 [INFO][5065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.503 [WARNING][5065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.503 [INFO][5065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" HandleID="k8s-pod-network.7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Workload="localhost-k8s-coredns--66bc5c9577--mknhf-eth0" Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.505 [INFO][5065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:44.511537 containerd[1458]: 2026-01-24 00:54:44.508 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477" Jan 24 00:54:44.512283 containerd[1458]: time="2026-01-24T00:54:44.511600872Z" level=info msg="TearDown network for sandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" successfully" Jan 24 00:54:44.518348 containerd[1458]: time="2026-01-24T00:54:44.518246312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:44.518487 containerd[1458]: time="2026-01-24T00:54:44.518365244Z" level=info msg="RemovePodSandbox \"7c621ef7e71c5746e47e0ec6a71af1d2916deccdb60be79adde7696d6f5aa477\" returns successfully" Jan 24 00:54:44.519354 containerd[1458]: time="2026-01-24T00:54:44.519292820Z" level=info msg="StopPodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\"" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.569 [WARNING][5083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17", Pod:"calico-apiserver-5f99fd9549-l6xtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95e4badb67f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.570 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.570 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" iface="eth0" netns="" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.570 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.570 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.601 [INFO][5091] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.601 [INFO][5091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.601 [INFO][5091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.610 [WARNING][5091] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.610 [INFO][5091] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.612 [INFO][5091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:44.620163 containerd[1458]: 2026-01-24 00:54:44.615 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.620163 containerd[1458]: time="2026-01-24T00:54:44.619867555Z" level=info msg="TearDown network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" successfully" Jan 24 00:54:44.620163 containerd[1458]: time="2026-01-24T00:54:44.619904765Z" level=info msg="StopPodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" returns successfully" Jan 24 00:54:44.620736 containerd[1458]: time="2026-01-24T00:54:44.620409515Z" level=info msg="RemovePodSandbox for \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\"" Jan 24 00:54:44.620736 containerd[1458]: time="2026-01-24T00:54:44.620448498Z" level=info msg="Forcibly stopping sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\"" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.666 [WARNING][5109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"96853c29-f6ae-4323-a4cd-7dc7ec0a8a17", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0112db8a048a7cbfac4fa79be5d65bff9a7993c12fb20d18796610eefd8d2b17", Pod:"calico-apiserver-5f99fd9549-l6xtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95e4badb67f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.666 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.666 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" iface="eth0" netns="" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.666 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.666 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.697 [INFO][5118] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.697 [INFO][5118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.697 [INFO][5118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.705 [WARNING][5118] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.705 [INFO][5118] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" HandleID="k8s-pod-network.1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Workload="localhost-k8s-calico--apiserver--5f99fd9549--l6xtt-eth0" Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.708 [INFO][5118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:44.714666 containerd[1458]: 2026-01-24 00:54:44.710 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397" Jan 24 00:54:44.715432 containerd[1458]: time="2026-01-24T00:54:44.714696900Z" level=info msg="TearDown network for sandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" successfully" Jan 24 00:54:44.720660 containerd[1458]: time="2026-01-24T00:54:44.720506544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:44.720660 containerd[1458]: time="2026-01-24T00:54:44.720602985Z" level=info msg="RemovePodSandbox \"1cac253725394e9c0f397e7aeef0511bb0e01199f0560997134f1a7a84585397\" returns successfully" Jan 24 00:54:44.721445 containerd[1458]: time="2026-01-24T00:54:44.721367706Z" level=info msg="StopPodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\"" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.765 [WARNING][5136] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cmnzx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa4fe547-535b-479c-9c29-60c4ee40c975", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456", Pod:"csi-node-driver-cmnzx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califc0c874807d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.765 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.765 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" iface="eth0" netns="" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.765 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.765 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.793 [INFO][5145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.793 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.793 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.802 [WARNING][5145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.802 [INFO][5145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.804 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:44.810259 containerd[1458]: 2026-01-24 00:54:44.807 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.810259 containerd[1458]: time="2026-01-24T00:54:44.810236573Z" level=info msg="TearDown network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" successfully" Jan 24 00:54:44.810259 containerd[1458]: time="2026-01-24T00:54:44.810266599Z" level=info msg="StopPodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" returns successfully" Jan 24 00:54:44.811064 containerd[1458]: time="2026-01-24T00:54:44.810856334Z" level=info msg="RemovePodSandbox for \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\"" Jan 24 00:54:44.811064 containerd[1458]: time="2026-01-24T00:54:44.810888675Z" level=info msg="Forcibly stopping sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\"" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.862 [WARNING][5162] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cmnzx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa4fe547-535b-479c-9c29-60c4ee40c975", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af931213d7eccaadbd3d41be51b282f0a99b0d7e19cb0b9a1a0aa3845c6cb456", Pod:"csi-node-driver-cmnzx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califc0c874807d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.862 [INFO][5162] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.862 [INFO][5162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" iface="eth0" netns="" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.862 [INFO][5162] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.862 [INFO][5162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.892 [INFO][5172] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.892 [INFO][5172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.892 [INFO][5172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.900 [WARNING][5172] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.900 [INFO][5172] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" HandleID="k8s-pod-network.c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Workload="localhost-k8s-csi--node--driver--cmnzx-eth0" Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.903 [INFO][5172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:44.909259 containerd[1458]: 2026-01-24 00:54:44.906 [INFO][5162] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694" Jan 24 00:54:44.909259 containerd[1458]: time="2026-01-24T00:54:44.909161232Z" level=info msg="TearDown network for sandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" successfully" Jan 24 00:54:44.916492 containerd[1458]: time="2026-01-24T00:54:44.916422214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:44.916672 containerd[1458]: time="2026-01-24T00:54:44.916521950Z" level=info msg="RemovePodSandbox \"c99c2894ad74d56ec2a34f2a0bf3c8356b4c9ec5f8ad663fd80509840fded694\" returns successfully" Jan 24 00:54:44.917282 containerd[1458]: time="2026-01-24T00:54:44.917243632Z" level=info msg="StopPodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\"" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.960 [WARNING][5188] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" WorkloadEndpoint="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.960 [INFO][5188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.960 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" iface="eth0" netns="" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.960 [INFO][5188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.960 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.987 [INFO][5197] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.988 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.988 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.994 [WARNING][5197] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.994 [INFO][5197] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.997 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.002361 containerd[1458]: 2026-01-24 00:54:44.999 [INFO][5188] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.002693 containerd[1458]: time="2026-01-24T00:54:45.002400764Z" level=info msg="TearDown network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" successfully" Jan 24 00:54:45.002693 containerd[1458]: time="2026-01-24T00:54:45.002437583Z" level=info msg="StopPodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" returns successfully" Jan 24 00:54:45.003406 containerd[1458]: time="2026-01-24T00:54:45.003358654Z" level=info msg="RemovePodSandbox for \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\"" Jan 24 00:54:45.003479 containerd[1458]: time="2026-01-24T00:54:45.003408607Z" level=info msg="Forcibly stopping sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\"" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.044 [WARNING][5215] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" WorkloadEndpoint="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.044 [INFO][5215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.044 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" iface="eth0" netns="" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.044 [INFO][5215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.044 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.068 [INFO][5224] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.068 [INFO][5224] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.068 [INFO][5224] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.076 [WARNING][5224] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.076 [INFO][5224] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" HandleID="k8s-pod-network.cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Workload="localhost-k8s-whisker--56d98d76cc--sj5nk-eth0" Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.078 [INFO][5224] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.084200 containerd[1458]: 2026-01-24 00:54:45.081 [INFO][5215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d" Jan 24 00:54:45.084664 containerd[1458]: time="2026-01-24T00:54:45.084238485Z" level=info msg="TearDown network for sandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" successfully" Jan 24 00:54:45.090195 containerd[1458]: time="2026-01-24T00:54:45.090098699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:45.090195 containerd[1458]: time="2026-01-24T00:54:45.090168329Z" level=info msg="RemovePodSandbox \"cf6a0be9997d18bc8018a640f01c8164cb4a0f2818a3d2366dac17241246966d\" returns successfully" Jan 24 00:54:45.090953 containerd[1458]: time="2026-01-24T00:54:45.090891678Z" level=info msg="StopPodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\"" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.137 [WARNING][5241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x44ds-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415", Pod:"coredns-66bc5c9577-x44ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3caae14ad9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.137 [INFO][5241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.137 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" iface="eth0" netns="" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.137 [INFO][5241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.137 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.165 [INFO][5250] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.165 [INFO][5250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.165 [INFO][5250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.173 [WARNING][5250] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.173 [INFO][5250] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.175 [INFO][5250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.181571 containerd[1458]: 2026-01-24 00:54:45.178 [INFO][5241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.182286 containerd[1458]: time="2026-01-24T00:54:45.181535292Z" level=info msg="TearDown network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" successfully" Jan 24 00:54:45.182286 containerd[1458]: time="2026-01-24T00:54:45.181615442Z" level=info msg="StopPodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" returns successfully" Jan 24 00:54:45.182854 containerd[1458]: time="2026-01-24T00:54:45.182814489Z" level=info msg="RemovePodSandbox for \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\"" Jan 24 00:54:45.182979 containerd[1458]: time="2026-01-24T00:54:45.182862929Z" level=info msg="Forcibly stopping sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\"" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.227 [WARNING][5268] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x44ds-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"aa4858bc-37d8-46bd-8eaf-fa6b9172ea0f", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed08d3c6337e7525dfad8dccbe50ab28e98a231fe4c83ff0d67f9ca5406ce415", Pod:"coredns-66bc5c9577-x44ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3caae14ad9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.227 [INFO][5268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.227 [INFO][5268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" iface="eth0" netns="" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.228 [INFO][5268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.228 [INFO][5268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.258 [INFO][5277] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.258 [INFO][5277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.258 [INFO][5277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.266 [WARNING][5277] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.266 [INFO][5277] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" HandleID="k8s-pod-network.c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Workload="localhost-k8s-coredns--66bc5c9577--x44ds-eth0" Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.268 [INFO][5277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.274672 containerd[1458]: 2026-01-24 00:54:45.271 [INFO][5268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127" Jan 24 00:54:45.275252 containerd[1458]: time="2026-01-24T00:54:45.274734073Z" level=info msg="TearDown network for sandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" successfully" Jan 24 00:54:45.280147 containerd[1458]: time="2026-01-24T00:54:45.280099833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:45.280196 containerd[1458]: time="2026-01-24T00:54:45.280175775Z" level=info msg="RemovePodSandbox \"c7ca9d346b674f03594292da9d0cbcd3457b939ef4c71e409b894782f8a43127\" returns successfully" Jan 24 00:54:45.280860 containerd[1458]: time="2026-01-24T00:54:45.280816311Z" level=info msg="StopPodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\"" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.326 [WARNING][5296] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8plzs-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"549ae7b9-2710-43b2-acf2-03007d90bb7e", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018", Pod:"goldmane-7c778bb748-8plzs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif7d5065c442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.326 [INFO][5296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.327 [INFO][5296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" iface="eth0" netns="" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.327 [INFO][5296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.327 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.353 [INFO][5305] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.353 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.353 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.363 [WARNING][5305] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.363 [INFO][5305] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.365 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.371500 containerd[1458]: 2026-01-24 00:54:45.368 [INFO][5296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.372608 containerd[1458]: time="2026-01-24T00:54:45.371522568Z" level=info msg="TearDown network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" successfully" Jan 24 00:54:45.372608 containerd[1458]: time="2026-01-24T00:54:45.371553977Z" level=info msg="StopPodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" returns successfully" Jan 24 00:54:45.372608 containerd[1458]: time="2026-01-24T00:54:45.372285153Z" level=info msg="RemovePodSandbox for \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\"" Jan 24 00:54:45.372608 containerd[1458]: time="2026-01-24T00:54:45.372317955Z" level=info msg="Forcibly stopping sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\"" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.422 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8plzs-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"549ae7b9-2710-43b2-acf2-03007d90bb7e", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615928e4bd44b68ba6eb3114db4ab7e72f48279bc96a8310df5e67ced1d64018", Pod:"goldmane-7c778bb748-8plzs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif7d5065c442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.423 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.423 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" iface="eth0" netns="" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.423 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.423 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.659 [INFO][5331] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.659 [INFO][5331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.659 [INFO][5331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.691 [WARNING][5331] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.691 [INFO][5331] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" HandleID="k8s-pod-network.824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Workload="localhost-k8s-goldmane--7c778bb748--8plzs-eth0" Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.693 [INFO][5331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.709266 containerd[1458]: 2026-01-24 00:54:45.706 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e" Jan 24 00:54:45.709638 containerd[1458]: time="2026-01-24T00:54:45.709311314Z" level=info msg="TearDown network for sandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" successfully" Jan 24 00:54:45.714244 containerd[1458]: time="2026-01-24T00:54:45.714168164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:45.714244 containerd[1458]: time="2026-01-24T00:54:45.714243433Z" level=info msg="RemovePodSandbox \"824bd42af892d6f34c926dd631aa726ef9a7b1370146b0f169e3acd8bdffd24e\" returns successfully" Jan 24 00:54:45.715131 containerd[1458]: time="2026-01-24T00:54:45.714920842Z" level=info msg="StopPodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\"" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.764 [WARNING][5349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0", GenerateName:"calico-kube-controllers-598df9794d-", Namespace:"calico-system", SelfLink:"", UID:"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598df9794d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3", Pod:"calico-kube-controllers-598df9794d-p5z6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11126ef0e39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.764 [INFO][5349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.764 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" iface="eth0" netns="" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.764 [INFO][5349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.764 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.787 [INFO][5358] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.787 [INFO][5358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.787 [INFO][5358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.794 [WARNING][5358] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.794 [INFO][5358] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.796 [INFO][5358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.802894 containerd[1458]: 2026-01-24 00:54:45.799 [INFO][5349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.803607 containerd[1458]: time="2026-01-24T00:54:45.802925916Z" level=info msg="TearDown network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" successfully" Jan 24 00:54:45.803607 containerd[1458]: time="2026-01-24T00:54:45.802990537Z" level=info msg="StopPodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" returns successfully" Jan 24 00:54:45.804175 containerd[1458]: time="2026-01-24T00:54:45.804107607Z" level=info msg="RemovePodSandbox for \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\"" Jan 24 00:54:45.804284 containerd[1458]: time="2026-01-24T00:54:45.804185752Z" level=info msg="Forcibly stopping sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\"" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.846 [WARNING][5375] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0", GenerateName:"calico-kube-controllers-598df9794d-", Namespace:"calico-system", SelfLink:"", UID:"80bbd4c4-8103-4c2d-b518-8fb02d9e29a2", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598df9794d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6caeefd09ff4f2eb42287b22504070bca69563ae89605bf06597f3f170cfd3", Pod:"calico-kube-controllers-598df9794d-p5z6d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11126ef0e39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.846 [INFO][5375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.846 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" iface="eth0" netns="" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.846 [INFO][5375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.846 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.871 [INFO][5384] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.871 [INFO][5384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.871 [INFO][5384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.879 [WARNING][5384] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.879 [INFO][5384] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" HandleID="k8s-pod-network.d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Workload="localhost-k8s-calico--kube--controllers--598df9794d--p5z6d-eth0" Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.881 [INFO][5384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.886046 containerd[1458]: 2026-01-24 00:54:45.883 [INFO][5375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502" Jan 24 00:54:45.886523 containerd[1458]: time="2026-01-24T00:54:45.886091461Z" level=info msg="TearDown network for sandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" successfully" Jan 24 00:54:45.890215 containerd[1458]: time="2026-01-24T00:54:45.890187444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:45.890260 containerd[1458]: time="2026-01-24T00:54:45.890229603Z" level=info msg="RemovePodSandbox \"d357f5b453a87fd3890ec566bb8c79a4f4b3d4a1cf7eb12e1947d2535ea7e502\" returns successfully" Jan 24 00:54:45.890737 containerd[1458]: time="2026-01-24T00:54:45.890689945Z" level=info msg="StopPodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\"" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.923 [WARNING][5404] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c115e27-4279-4a97-b3f2-127b5b368e0a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b", Pod:"calico-apiserver-5f99fd9549-84j78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie014acedbd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.924 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.924 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" iface="eth0" netns="" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.924 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.924 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.951 [INFO][5413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.951 [INFO][5413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.951 [INFO][5413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.958 [WARNING][5413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.958 [INFO][5413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.960 [INFO][5413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:45.966924 containerd[1458]: 2026-01-24 00:54:45.964 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:45.966924 containerd[1458]: time="2026-01-24T00:54:45.966819730Z" level=info msg="TearDown network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" successfully" Jan 24 00:54:45.966924 containerd[1458]: time="2026-01-24T00:54:45.966858242Z" level=info msg="StopPodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" returns successfully" Jan 24 00:54:45.967742 containerd[1458]: time="2026-01-24T00:54:45.967497841Z" level=info msg="RemovePodSandbox for \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\"" Jan 24 00:54:45.967742 containerd[1458]: time="2026-01-24T00:54:45.967529810Z" level=info msg="Forcibly stopping sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\"" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.011 [WARNING][5430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0", GenerateName:"calico-apiserver-5f99fd9549-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c115e27-4279-4a97-b3f2-127b5b368e0a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f99fd9549", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87718e0bf308bb435e9dcefc3ffc64d3c46166e1369265fe1ca1f3a5637ec55b", Pod:"calico-apiserver-5f99fd9549-84j78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie014acedbd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.012 [INFO][5430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.012 [INFO][5430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" iface="eth0" netns="" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.012 [INFO][5430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.013 [INFO][5430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.039 [INFO][5439] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.039 [INFO][5439] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.039 [INFO][5439] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.049 [WARNING][5439] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.049 [INFO][5439] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" HandleID="k8s-pod-network.36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Workload="localhost-k8s-calico--apiserver--5f99fd9549--84j78-eth0" Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.053 [INFO][5439] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:46.058676 containerd[1458]: 2026-01-24 00:54:46.055 [INFO][5430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7" Jan 24 00:54:46.058676 containerd[1458]: time="2026-01-24T00:54:46.058622016Z" level=info msg="TearDown network for sandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" successfully" Jan 24 00:54:46.063387 containerd[1458]: time="2026-01-24T00:54:46.063309819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:54:46.063468 containerd[1458]: time="2026-01-24T00:54:46.063396861Z" level=info msg="RemovePodSandbox \"36d15861fb73e2e6a428afcb533d52d2f1d781da47fcea6a7e9efb4e053a4fa7\" returns successfully" Jan 24 00:54:47.990271 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:48954.service - OpenSSH per-connection server daemon (10.0.0.1:48954). Jan 24 00:54:48.037811 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 48954 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:48.039872 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:48.045500 systemd-logind[1439]: New session 9 of user core. Jan 24 00:54:48.053151 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:54:48.213548 sshd[5449]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:48.218320 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:48954.service: Deactivated successfully. Jan 24 00:54:48.221715 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:54:48.222680 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:54:48.224373 systemd-logind[1439]: Removed session 9. 
Jan 24 00:54:48.321672 kubelet[2497]: E0124 00:54:48.321556 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2" Jan 24 00:54:49.318766 kubelet[2497]: E0124 00:54:49.318616 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df7ff5fdf-zht4d" podUID="3c68c08e-36da-4f47-947e-e20fabb43d39" Jan 24 00:54:50.608133 kubelet[2497]: E0124 00:54:50.608017 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:52.318427 kubelet[2497]: E0124 00:54:52.318267 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e" Jan 24 00:54:52.318427 kubelet[2497]: E0124 00:54:52.318289 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a" Jan 24 00:54:52.319225 kubelet[2497]: E0124 00:54:52.319144 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975" Jan 24 00:54:53.225663 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:48966.service - OpenSSH per-connection server daemon (10.0.0.1:48966). Jan 24 00:54:53.264997 sshd[5491]: Accepted publickey for core from 10.0.0.1 port 48966 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:53.266635 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:53.272216 systemd-logind[1439]: New session 10 of user core. Jan 24 00:54:53.286245 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:54:53.407195 sshd[5491]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:53.410465 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:48966.service: Deactivated successfully. Jan 24 00:54:53.413252 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:54:53.415129 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:54:53.416355 systemd-logind[1439]: Removed session 10. Jan 24 00:54:55.318298 kubelet[2497]: E0124 00:54:55.318198 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17" Jan 24 00:54:58.424876 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:39564.service - OpenSSH per-connection server daemon (10.0.0.1:39564). Jan 24 00:54:58.480048 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 39564 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:58.482239 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:58.487790 systemd-logind[1439]: New session 11 of user core. Jan 24 00:54:58.498414 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:54:58.674206 sshd[5507]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:58.685698 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:39564.service: Deactivated successfully. Jan 24 00:54:58.688284 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:54:58.690455 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:54:58.701496 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:39578.service - OpenSSH per-connection server daemon (10.0.0.1:39578). Jan 24 00:54:58.703787 systemd-logind[1439]: Removed session 11. 
Jan 24 00:54:58.741263 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 39578 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:58.743223 sshd[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:58.752836 systemd-logind[1439]: New session 12 of user core. Jan 24 00:54:58.763293 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:54:58.930631 sshd[5525]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:58.942621 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:39578.service: Deactivated successfully. Jan 24 00:54:58.945416 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:54:58.950072 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:54:58.964571 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:39580.service - OpenSSH per-connection server daemon (10.0.0.1:39580). Jan 24 00:54:58.967161 systemd-logind[1439]: Removed session 12. Jan 24 00:54:59.007977 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 39580 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:59.010295 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:59.016900 systemd-logind[1439]: New session 13 of user core. Jan 24 00:54:59.024172 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:54:59.159746 sshd[5539]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:59.164066 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:39580.service: Deactivated successfully. Jan 24 00:54:59.166182 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:54:59.167273 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:54:59.168627 systemd-logind[1439]: Removed session 13. 
Jan 24 00:55:02.319268 containerd[1458]: time="2026-01-24T00:55:02.318829728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:55:02.398220 containerd[1458]: time="2026-01-24T00:55:02.398135418Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:02.399614 containerd[1458]: time="2026-01-24T00:55:02.399536074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:55:02.399691 containerd[1458]: time="2026-01-24T00:55:02.399585156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:55:02.399982 kubelet[2497]: E0124 00:55:02.399834 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:55:02.400405 kubelet[2497]: E0124 00:55:02.399919 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:55:02.400405 kubelet[2497]: E0124 00:55:02.400162 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598df9794d-p5z6d_calico-system(80bbd4c4-8103-4c2d-b518-8fb02d9e29a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:02.400405 kubelet[2497]: E0124 00:55:02.400217 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2"
Jan 24 00:55:03.318568 containerd[1458]: time="2026-01-24T00:55:03.318427684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:55:03.386017 containerd[1458]: time="2026-01-24T00:55:03.385852358Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:03.387683 containerd[1458]: time="2026-01-24T00:55:03.387639765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:55:03.387969 containerd[1458]: time="2026-01-24T00:55:03.387697482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:55:03.388164 kubelet[2497]: E0124 00:55:03.388058 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:55:03.388164 kubelet[2497]: E0124 00:55:03.388158 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:55:03.388352 kubelet[2497]: E0124 00:55:03.388217 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:03.389681 containerd[1458]: time="2026-01-24T00:55:03.389523762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:55:03.465807 containerd[1458]: time="2026-01-24T00:55:03.465615693Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:03.467154 containerd[1458]: time="2026-01-24T00:55:03.467082066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:55:03.467288 containerd[1458]: time="2026-01-24T00:55:03.467200296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:55:03.467481 kubelet[2497]: E0124 00:55:03.467414 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:55:03.467481 kubelet[2497]: E0124 00:55:03.467470 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:55:03.468270 kubelet[2497]: E0124 00:55:03.467547 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cmnzx_calico-system(fa4fe547-535b-479c-9c29-60c4ee40c975): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:03.468270 kubelet[2497]: E0124 00:55:03.467591 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975"
Jan 24 00:55:04.176849 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:39592.service - OpenSSH per-connection server daemon (10.0.0.1:39592).
Jan 24 00:55:04.230969 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 39592 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:04.233559 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:04.239798 systemd-logind[1439]: New session 14 of user core.
Jan 24 00:55:04.248254 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 24 00:55:04.323149 containerd[1458]: time="2026-01-24T00:55:04.322813243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:55:04.389444 sshd[5559]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:04.396203 containerd[1458]: time="2026-01-24T00:55:04.395907268Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:04.397723 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:39592.service: Deactivated successfully.
Jan 24 00:55:04.398059 containerd[1458]: time="2026-01-24T00:55:04.397738153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:55:04.398059 containerd[1458]: time="2026-01-24T00:55:04.397820314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:55:04.398118 kubelet[2497]: E0124 00:55:04.398068 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:55:04.398199 kubelet[2497]: E0124 00:55:04.398115 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:55:04.398329 kubelet[2497]: E0124 00:55:04.398286 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5df7ff5fdf-zht4d_calico-system(3c68c08e-36da-4f47-947e-e20fabb43d39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:04.399611 containerd[1458]: time="2026-01-24T00:55:04.399070111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:55:04.402098 systemd[1]: session-14.scope: Deactivated successfully.
Jan 24 00:55:04.403300 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
Jan 24 00:55:04.410424 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:52702.service - OpenSSH per-connection server daemon (10.0.0.1:52702).
Jan 24 00:55:04.413229 systemd-logind[1439]: Removed session 14.
Jan 24 00:55:04.451596 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 52702 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:04.454035 sshd[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:04.460817 systemd-logind[1439]: New session 15 of user core.
Jan 24 00:55:04.465780 containerd[1458]: time="2026-01-24T00:55:04.465693610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:04.466385 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 24 00:55:04.467236 containerd[1458]: time="2026-01-24T00:55:04.467144850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:55:04.467236 containerd[1458]: time="2026-01-24T00:55:04.467181336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:55:04.467499 kubelet[2497]: E0124 00:55:04.467448 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:55:04.467783 kubelet[2497]: E0124 00:55:04.467506 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:55:04.467783 kubelet[2497]: E0124 00:55:04.467649 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8plzs_calico-system(549ae7b9-2710-43b2-acf2-03007d90bb7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:04.467783 kubelet[2497]: E0124 00:55:04.467678 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e"
Jan 24 00:55:04.468747 containerd[1458]: time="2026-01-24T00:55:04.468477610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:55:04.541012 containerd[1458]: time="2026-01-24T00:55:04.540774401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:04.542349 containerd[1458]: time="2026-01-24T00:55:04.542241381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:55:04.542349 containerd[1458]: time="2026-01-24T00:55:04.542330523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:55:04.542637 kubelet[2497]: E0124 00:55:04.542578 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:55:04.542682 kubelet[2497]: E0124 00:55:04.542650 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:55:04.542824 kubelet[2497]: E0124 00:55:04.542769 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5df7ff5fdf-zht4d_calico-system(3c68c08e-36da-4f47-947e-e20fabb43d39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:04.542880 kubelet[2497]: E0124 00:55:04.542834 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df7ff5fdf-zht4d" podUID="3c68c08e-36da-4f47-947e-e20fabb43d39"
Jan 24 00:55:04.747123 sshd[5573]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:04.755124 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:52702.service: Deactivated successfully.
Jan 24 00:55:04.757327 systemd[1]: session-15.scope: Deactivated successfully.
Jan 24 00:55:04.758874 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
Jan 24 00:55:04.767479 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:52704.service - OpenSSH per-connection server daemon (10.0.0.1:52704).
Jan 24 00:55:04.768890 systemd-logind[1439]: Removed session 15.
Jan 24 00:55:04.823444 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 52704 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:04.825252 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:04.831164 systemd-logind[1439]: New session 16 of user core.
Jan 24 00:55:04.842255 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:55:05.501229 sshd[5585]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:05.512245 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:52704.service: Deactivated successfully.
Jan 24 00:55:05.514828 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:55:05.522121 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:55:05.534877 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:52706.service - OpenSSH per-connection server daemon (10.0.0.1:52706).
Jan 24 00:55:05.539072 systemd-logind[1439]: Removed session 16.
Jan 24 00:55:05.576797 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 52706 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:05.579839 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:05.588055 systemd-logind[1439]: New session 17 of user core.
Jan 24 00:55:05.595470 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:55:05.908308 sshd[5603]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:05.918671 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:52706.service: Deactivated successfully.
Jan 24 00:55:05.929832 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:55:05.937138 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:55:05.946760 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:52710.service - OpenSSH per-connection server daemon (10.0.0.1:52710).
Jan 24 00:55:05.949707 systemd-logind[1439]: Removed session 17.
Jan 24 00:55:06.020017 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 52710 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:06.020429 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:06.028300 systemd-logind[1439]: New session 18 of user core.
Jan 24 00:55:06.036214 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:55:06.193443 sshd[5615]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:06.198278 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:52710.service: Deactivated successfully.
Jan 24 00:55:06.200775 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:55:06.202768 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:55:06.204299 systemd-logind[1439]: Removed session 18.
Jan 24 00:55:06.318160 kubelet[2497]: E0124 00:55:06.316918 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:55:06.321115 containerd[1458]: time="2026-01-24T00:55:06.320771084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:55:06.393764 containerd[1458]: time="2026-01-24T00:55:06.393692261Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:06.395083 containerd[1458]: time="2026-01-24T00:55:06.395030702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:55:06.395236 containerd[1458]: time="2026-01-24T00:55:06.395107142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:55:06.395501 kubelet[2497]: E0124 00:55:06.395419 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:55:06.395594 kubelet[2497]: E0124 00:55:06.395510 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:55:06.395697 kubelet[2497]: E0124 00:55:06.395646 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f99fd9549-84j78_calico-apiserver(2c115e27-4279-4a97-b3f2-127b5b368e0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:06.395828 kubelet[2497]: E0124 00:55:06.395723 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a"
Jan 24 00:55:10.319788 containerd[1458]: time="2026-01-24T00:55:10.319464151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:55:10.411443 containerd[1458]: time="2026-01-24T00:55:10.411297549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:55:10.413027 containerd[1458]: time="2026-01-24T00:55:10.412862993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:55:10.413163 containerd[1458]: time="2026-01-24T00:55:10.413022288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:55:10.413333 kubelet[2497]: E0124 00:55:10.413276 2497 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:55:10.413674 kubelet[2497]: E0124 00:55:10.413330 2497 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:55:10.413674 kubelet[2497]: E0124 00:55:10.413416 2497 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f99fd9549-l6xtt_calico-apiserver(96853c29-f6ae-4323-a4cd-7dc7ec0a8a17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:55:10.413674 kubelet[2497]: E0124 00:55:10.413445 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17"
Jan 24 00:55:11.214052 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:52720.service - OpenSSH per-connection server daemon (10.0.0.1:52720).
Jan 24 00:55:11.252825 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 52720 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:11.254479 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:11.259387 systemd-logind[1439]: New session 19 of user core.
Jan 24 00:55:11.269144 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:55:11.388617 sshd[5635]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:11.391924 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:52720.service: Deactivated successfully.
Jan 24 00:55:11.395522 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:55:11.398067 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:55:11.399433 systemd-logind[1439]: Removed session 19.
Jan 24 00:55:13.317762 kubelet[2497]: E0124 00:55:13.317687 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:55:15.318494 kubelet[2497]: E0124 00:55:15.318427 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cmnzx" podUID="fa4fe547-535b-479c-9c29-60c4ee40c975"
Jan 24 00:55:16.318099 kubelet[2497]: E0124 00:55:16.317742 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598df9794d-p5z6d" podUID="80bbd4c4-8103-4c2d-b518-8fb02d9e29a2"
Jan 24 00:55:16.401834 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974).
Jan 24 00:55:16.458029 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:16.460171 sshd[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:16.465479 systemd-logind[1439]: New session 20 of user core.
Jan 24 00:55:16.475124 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:55:16.608832 sshd[5649]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:16.613449 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:45974.service: Deactivated successfully.
Jan 24 00:55:16.616014 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:55:16.616858 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:55:16.618421 systemd-logind[1439]: Removed session 20.
Jan 24 00:55:17.317791 kubelet[2497]: E0124 00:55:17.317756 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:55:17.318572 kubelet[2497]: E0124 00:55:17.318478 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df7ff5fdf-zht4d" podUID="3c68c08e-36da-4f47-947e-e20fabb43d39"
Jan 24 00:55:18.319748 kubelet[2497]: E0124 00:55:18.319555 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8plzs" podUID="549ae7b9-2710-43b2-acf2-03007d90bb7e"
Jan 24 00:55:19.317489 kubelet[2497]: E0124 00:55:19.317443 2497 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:55:19.318347 kubelet[2497]: E0124 00:55:19.318281 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-84j78" podUID="2c115e27-4279-4a97-b3f2-127b5b368e0a"
Jan 24 00:55:21.628022 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:45986.service - OpenSSH per-connection server daemon (10.0.0.1:45986).
Jan 24 00:55:21.685330 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 45986 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:55:21.687071 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:55:21.692223 systemd-logind[1439]: New session 21 of user core.
Jan 24 00:55:21.698120 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:55:21.821764 sshd[5689]: pam_unix(sshd:session): session closed for user core
Jan 24 00:55:21.825816 systemd[1]: sshd@20-10.0.0.102:22-10.0.0.1:45986.service: Deactivated successfully.
Jan 24 00:55:21.828054 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:55:21.829097 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:55:21.831136 systemd-logind[1439]: Removed session 21.
Jan 24 00:55:23.320658 kubelet[2497]: E0124 00:55:23.320509 2497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f99fd9549-l6xtt" podUID="96853c29-f6ae-4323-a4cd-7dc7ec0a8a17"