Jan 24 00:55:55.091523 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:55:55.091544 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:55:55.091602 kernel: BIOS-provided physical RAM map: Jan 24 00:55:55.091608 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 24 00:55:55.091614 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 24 00:55:55.091619 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 00:55:55.091626 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 24 00:55:55.091631 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 24 00:55:55.091637 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:55:55.091645 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 00:55:55.091650 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:55:55.091656 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 00:55:55.091661 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:55:55.091667 kernel: NX (Execute Disable) protection: active Jan 24 00:55:55.091673 kernel: APIC: Static calls initialized Jan 24 00:55:55.091681 kernel: SMBIOS 2.8 present. 
Jan 24 00:55:55.091687 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 24 00:55:55.091693 kernel: Hypervisor detected: KVM Jan 24 00:55:55.091698 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:55:55.091704 kernel: kvm-clock: using sched offset of 4718411358 cycles Jan 24 00:55:55.091710 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:55:55.091744 kernel: tsc: Detected 2445.426 MHz processor Jan 24 00:55:55.091751 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:55:55.091757 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:55:55.091763 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 00:55:55.091772 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 00:55:55.091778 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:55:55.091784 kernel: Using GB pages for direct mapping Jan 24 00:55:55.091789 kernel: ACPI: Early table checksum verification disabled Jan 24 00:55:55.091795 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 24 00:55:55.091801 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091807 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091813 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091821 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 24 00:55:55.091827 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091833 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091839 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091845 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:55:55.091850 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 24 00:55:55.091856 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 24 00:55:55.091866 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 24 00:55:55.091874 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 24 00:55:55.091880 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 24 00:55:55.091887 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 24 00:55:55.091893 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 24 00:55:55.091899 kernel: No NUMA configuration found Jan 24 00:55:55.091905 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 24 00:55:55.091914 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 24 00:55:55.091920 kernel: Zone ranges: Jan 24 00:55:55.091926 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:55:55.091932 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 24 00:55:55.091938 kernel: Normal empty Jan 24 00:55:55.091944 kernel: Movable zone start for each node Jan 24 00:55:55.091950 kernel: Early memory node ranges Jan 24 00:55:55.091956 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 24 00:55:55.091962 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 24 00:55:55.091968 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 24 00:55:55.091977 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:55:55.091983 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:55:55.091989 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 24 00:55:55.091995 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:55:55.092001 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:55:55.092007 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:55:55.092013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:55:55.092020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:55:55.092026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:55:55.092034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:55:55.092040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:55:55.092046 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:55:55.092052 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:55:55.092058 kernel: TSC deadline timer available Jan 24 00:55:55.092065 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 24 00:55:55.092071 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:55:55.092077 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:55:55.092083 kernel: kvm-guest: setup PV sched yield Jan 24 00:55:55.092089 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:55:55.092098 kernel: Booting paravirtualized kernel on KVM Jan 24 00:55:55.092104 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:55:55.092110 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 00:55:55.092117 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 24 00:55:55.092123 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 24 00:55:55.092129 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 00:55:55.092135 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:55:55.092141 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:55:55.092148 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:55:55.092156 kernel: random: crng init done Jan 24 00:55:55.092162 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:55:55.092168 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:55:55.092174 kernel: Fallback order for Node 0: 0 Jan 24 00:55:55.092181 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 24 00:55:55.092187 kernel: Policy zone: DMA32 Jan 24 00:55:55.092193 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:55:55.092199 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved) Jan 24 00:55:55.092208 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 00:55:55.092214 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:55:55.092220 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:55:55.092226 kernel: Dynamic Preempt: voluntary Jan 24 00:55:55.092232 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:55:55.092242 kernel: rcu: RCU event tracing is enabled. Jan 24 00:55:55.092248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 00:55:55.092255 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:55:55.092261 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:55:55.092269 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:55:55.092275 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:55:55.092281 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 00:55:55.092288 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 00:55:55.092294 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:55:55.092300 kernel: Console: colour VGA+ 80x25 Jan 24 00:55:55.092306 kernel: printk: console [ttyS0] enabled Jan 24 00:55:55.092312 kernel: ACPI: Core revision 20230628 Jan 24 00:55:55.092318 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:55:55.092326 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:55:55.092332 kernel: x2apic enabled Jan 24 00:55:55.092339 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:55:55.092345 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:55:55.092351 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:55:55.092357 kernel: kvm-guest: setup PV IPIs Jan 24 00:55:55.092364 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:55:55.092379 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:55:55.092386 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 24 00:55:55.092392 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:55:55.092399 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:55:55.092405 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:55:55.092415 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:55:55.092421 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:55:55.092428 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:55:55.092434 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:55:55.092441 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:55:55.092450 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 24 00:55:55.092457 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:55:55.092463 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:55:55.092469 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:55:55.092476 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:55:55.092482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:55:55.092489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:55:55.092495 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:55:55.092504 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:55:55.092510 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 00:55:55.092517 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:55:55.092523 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:55:55.092529 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:55:55.092536 kernel: landlock: Up and running. Jan 24 00:55:55.092542 kernel: SELinux: Initializing. Jan 24 00:55:55.092584 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:55:55.092591 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:55:55.092601 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:55:55.092607 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:55:55.092614 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:55:55.092620 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:55:55.092627 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 00:55:55.092633 kernel: signal: max sigframe size: 1776 Jan 24 00:55:55.092640 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:55:55.092646 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:55:55.092653 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:55:55.092661 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:55:55.092668 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:55:55.092674 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 24 00:55:55.092680 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 00:55:55.092687 kernel: smpboot: Max logical packages: 1 Jan 24 00:55:55.092693 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 00:55:55.092700 kernel: devtmpfs: initialized Jan 24 00:55:55.092706 kernel: x86/mm: Memory block size: 128MB Jan 24 00:55:55.092713 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:55:55.092747 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 00:55:55.092753 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:55:55.092760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:55:55.092766 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:55:55.092773 kernel: audit: type=2000 audit(1769216153.830:1): state=initialized audit_enabled=0 res=1 Jan 24 00:55:55.092779 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:55:55.092785 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:55:55.092792 kernel: cpuidle: using governor menu Jan 24 00:55:55.092798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:55:55.092807 kernel: dca service started, version 1.12.1 Jan 24 00:55:55.092813 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 00:55:55.092820 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:55:55.092826 kernel: PCI: Using configuration type 1 for base access Jan 24 00:55:55.092833 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:55:55.092839 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:55:55.092846 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:55:55.092852 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:55:55.092858 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:55:55.092867 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:55:55.092873 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:55:55.092880 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:55:55.092886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:55:55.092892 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:55:55.092899 kernel: ACPI: Interpreter enabled Jan 24 00:55:55.092905 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:55:55.092911 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:55:55.092918 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:55:55.092927 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:55:55.092933 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:55:55.092939 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:55:55.093114 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:55:55.093249 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:55:55.093372 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:55:55.093381 kernel: PCI host bridge to bus 0000:00 Jan 24 00:55:55.093506 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:55:55.093671 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 24 00:55:55.093823 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:55:55.093933 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 00:55:55.094039 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:55:55.094154 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 24 00:55:55.094299 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:55:55.094477 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:55:55.094703 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 24 00:55:55.094902 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 24 00:55:55.095032 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 24 00:55:55.095150 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:55:55.095268 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:55:55.095405 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 24 00:55:55.095524 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 24 00:55:55.095768 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 24 00:55:55.095892 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:55:55.096018 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 24 00:55:55.096136 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 24 00:55:55.096252 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 24 00:55:55.096374 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 24 00:55:55.096499 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:55:55.096691 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 24 00:55:55.096849 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 24 00:55:55.096968 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 24 00:55:55.097083 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:55:55.097205 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:55:55.097328 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:55:55.097450 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:55:55.097669 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 24 00:55:55.097826 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 24 00:55:55.097953 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:55:55.098069 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 24 00:55:55.098083 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:55:55.098090 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:55:55.098097 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:55:55.098103 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:55:55.098110 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:55:55.098116 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:55:55.098122 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:55:55.098130 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:55:55.098136 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
24 00:55:55.098145 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:55:55.098151 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:55:55.098158 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:55:55.098164 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:55:55.098170 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:55:55.098177 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:55:55.098183 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:55:55.098190 kernel: iommu: Default domain type: Translated Jan 24 00:55:55.098196 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:55:55.098205 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:55:55.098212 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:55:55.098218 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 24 00:55:55.098224 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 24 00:55:55.098340 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:55:55.098457 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:55:55.098635 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:55:55.098646 kernel: vgaarb: loaded Jan 24 00:55:55.098652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:55:55.098664 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:55:55.098670 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:55:55.098677 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:55:55.098683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:55:55.098690 kernel: pnp: PnP ACPI init Jan 24 00:55:55.098853 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:55:55.098864 kernel: pnp: PnP ACPI: found 6 devices Jan 24 00:55:55.098871 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:55:55.098882 kernel: NET: Registered PF_INET protocol family Jan 24 00:55:55.098889 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:55:55.098895 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:55:55.098902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:55:55.098909 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:55:55.098915 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:55:55.098921 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:55:55.098928 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:55:55.098934 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:55:55.098943 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:55:55.098950 kernel: NET: Registered PF_XDP protocol family Jan 24 00:55:55.099059 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:55:55.099166 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:55:55.099271 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:55:55.099377 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 00:55:55.099483 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 24 00:55:55.099646 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 24 00:55:55.099660 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:55:55.099667 kernel: Initialise system trusted keyrings Jan 24 00:55:55.099674 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:55:55.099680 kernel: Key type asymmetric registered Jan 24 00:55:55.099687 kernel: Asymmetric key parser 'x509' registered Jan 24 00:55:55.099693 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:55:55.099700 kernel: io scheduler mq-deadline registered Jan 24 00:55:55.099706 kernel: io scheduler kyber registered Jan 24 00:55:55.099713 kernel: io scheduler bfq registered Jan 24 00:55:55.099755 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:55:55.099762 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:55:55.099769 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:55:55.099776 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:55:55.099782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:55:55.099789 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:55:55.099796 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:55:55.099802 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:55:55.099808 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:55:55.099946 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 00:55:55.099958 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:55:55.100100 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 00:55:55.100219 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:55:54 UTC (1769216154) Jan 24 00:55:55.100329 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:55:55.100338 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:55:55.100344 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:55:55.100351 kernel: Segment Routing with IPv6 Jan 24 00:55:55.100362 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:55:55.100368 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:55:55.100375 kernel: Key type dns_resolver registered Jan 24 00:55:55.100382 kernel: IPI shorthand broadcast: enabled Jan 24 00:55:55.100388 kernel: sched_clock: Marking stable (1113042993, 338368668)->(1679055681, -227644020) Jan 24 00:55:55.100395 kernel: registered taskstats version 1 Jan 24 00:55:55.100401 kernel: Loading compiled-in X.509 certificates Jan 24 00:55:55.100408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:55:55.100415 kernel: Key type .fscrypt registered Jan 24 00:55:55.100423 kernel: Key type fscrypt-provisioning registered Jan 24 00:55:55.100430 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 00:55:55.100437 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:55:55.100443 kernel: ima: No architecture policies found Jan 24 00:55:55.100450 kernel: clk: Disabling unused clocks Jan 24 00:55:55.100457 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:55:55.100463 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:55:55.100470 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:55:55.100476 kernel: Run /init as init process Jan 24 00:55:55.100485 kernel: with arguments: Jan 24 00:55:55.100492 kernel: /init Jan 24 00:55:55.100498 kernel: with environment: Jan 24 00:55:55.100505 kernel: HOME=/ Jan 24 00:55:55.100511 kernel: TERM=linux Jan 24 00:55:55.100519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:55:55.100528 systemd[1]: Detected virtualization kvm. Jan 24 00:55:55.100535 systemd[1]: Detected architecture x86-64. Jan 24 00:55:55.100544 systemd[1]: Running in initrd. Jan 24 00:55:55.100590 systemd[1]: No hostname configured, using default hostname. Jan 24 00:55:55.100597 systemd[1]: Hostname set to . Jan 24 00:55:55.100605 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:55:55.100612 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:55:55.100619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:55:55.100626 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:55:55.100634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:55:55.100644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:55:55.100651 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:55:55.100658 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:55:55.100666 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:55:55.100674 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:55:55.100681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:55:55.100690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:55:55.100697 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:55:55.100704 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:55:55.100711 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:55:55.100754 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:55:55.100764 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:55:55.100771 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:55:55.100781 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:55:55.100788 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 24 00:55:55.100797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:55:55.100804 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:55:55.100812 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:55:55.100819 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:55:55.100826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:55:55.100833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:55:55.100842 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:55:55.100850 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:55:55.100857 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:55:55.100864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:55:55.100871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:55.100898 systemd-journald[193]: Collecting audit messages is disabled. Jan 24 00:55:55.100918 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:55:55.100926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:55:55.100934 systemd-journald[193]: Journal started Jan 24 00:55:55.100952 systemd-journald[193]: Runtime Journal (/run/log/journal/bea0ce7cf774405ea1f6007917aecb32) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:55:55.102669 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:55:55.104255 systemd-modules-load[195]: Inserted module 'overlay' Jan 24 00:55:55.232461 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:55:55.232498 kernel: Bridge firewalling registered Jan 24 00:55:55.139685 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 24 00:55:55.231205 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:55:55.233217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:55:55.242053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:55.259813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:55:55.260755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:55:55.263641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:55:55.266756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:55:55.271381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:55:55.274692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:55:55.286652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:55:55.303673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:55:55.309932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:55:55.316881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:55:55.340894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 24 00:55:55.346212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:55:55.353287 dracut-cmdline[229]: dracut-dracut-053 Jan 24 00:55:55.355660 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:55:55.400358 systemd-resolved[235]: Positive Trust Anchors: Jan 24 00:55:55.400392 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:55:55.400419 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:55:55.403016 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 24 00:55:55.404201 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:55:55.410384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:55:55.495654 kernel: SCSI subsystem initialized Jan 24 00:55:55.508649 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:55:55.527667 kernel: iscsi: registered transport (tcp) Jan 24 00:55:55.552513 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:55:55.552680 kernel: QLogic iSCSI HBA Driver Jan 24 00:55:55.606204 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:55:55.616852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:55:55.652647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:55:55.652715 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:55:55.655511 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:55:55.705663 kernel: raid6: avx2x4 gen() 21575 MB/s Jan 24 00:55:55.725654 kernel: raid6: avx2x2 gen() 23474 MB/s Jan 24 00:55:55.744964 kernel: raid6: avx2x1 gen() 23039 MB/s Jan 24 00:55:55.745031 kernel: raid6: using algorithm avx2x2 gen() 23474 MB/s Jan 24 00:55:55.764924 kernel: raid6: .... xor() 24064 MB/s, rmw enabled Jan 24 00:55:55.765012 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:55:55.787669 kernel: xor: automatically using best checksumming function avx Jan 24 00:55:55.943634 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:55:55.957892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:55:55.975003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:55:55.990368 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 24 00:55:55.995775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 24 00:55:56.013924 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:55:56.034119 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 24 00:55:56.079443 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:55:56.097933 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:55:56.180657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:55:56.189794 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:55:56.202116 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:55:56.207715 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:55:56.212673 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:55:56.215846 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:55:56.231948 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:55:56.247912 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:55:56.257926 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:55:56.261780 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:55:56.269777 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:55:56.275656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:55:56.292803 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:55:56.292850 kernel: GPT:9289727 != 19775487 Jan 24 00:55:56.292871 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:55:56.292892 kernel: GPT:9289727 != 19775487 Jan 24 00:55:56.292912 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:55:56.292932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:55:56.275817 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:55:56.296702 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:55:56.303472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:55:56.306921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:56.313342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:56.322148 kernel: libata version 3.00 loaded. Jan 24 00:55:56.329091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:56.341100 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:55:56.341345 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:55:56.341366 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:55:56.341673 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:55:56.347609 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 24 00:55:56.351606 kernel: AES CTR mode by8 optimization enabled Jan 24 00:55:56.351638 kernel: scsi host0: ahci Jan 24 00:55:56.365476 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Jan 24 00:55:56.365542 kernel: scsi host1: ahci Jan 24 00:55:56.366404 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (466) Jan 24 00:55:56.366419 kernel: scsi host2: ahci Jan 24 00:55:56.367720 kernel: scsi host3: ahci Jan 24 00:55:56.364271 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:55:56.488636 kernel: scsi host4: ahci Jan 24 00:55:56.489016 kernel: scsi host5: ahci Jan 24 00:55:56.489245 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 24 00:55:56.489264 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 24 00:55:56.489280 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 24 00:55:56.489296 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 24 00:55:56.489312 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 24 00:55:56.489326 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 24 00:55:56.378502 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:55:56.499817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:56.513700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:55:56.519050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:55:56.523779 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:55:56.547814 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:55:56.551821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:55:56.568215 disk-uuid[558]: Primary Header is updated. Jan 24 00:55:56.568215 disk-uuid[558]: Secondary Entries is updated. Jan 24 00:55:56.568215 disk-uuid[558]: Secondary Header is updated. Jan 24 00:55:56.574350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:55:56.582391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:55:56.691600 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:55:56.691696 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:55:56.694594 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:55:56.697592 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:55:56.700646 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:55:56.700677 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:55:56.704004 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:55:56.704031 kernel: ata3.00: applying bridge limits Jan 24 00:55:56.707624 kernel: ata3.00: configured for UDMA/100 Jan 24 00:55:56.712777 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:55:56.768622 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:55:56.769082 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:55:56.782655 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:55:57.603617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:55:57.604090 disk-uuid[564]: The operation has completed successfully. Jan 24 00:55:57.640953 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:55:57.641161 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:55:57.668894 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:55:57.680719 sh[595]: Success Jan 24 00:55:57.700801 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:55:57.751905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:55:57.771457 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:55:57.777033 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:55:57.798309 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:55:57.798375 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:57.798399 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:55:57.801031 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:55:57.802989 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:55:57.813055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:55:57.814107 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:55:57.830973 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:55:57.837937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:55:57.862608 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:57.862664 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:57.862681 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:55:57.870662 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:55:57.883157 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:55:57.890212 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:57.897826 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 24 00:55:57.907902 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:55:57.983437 ignition[698]: Ignition 2.19.0 Jan 24 00:55:57.983474 ignition[698]: Stage: fetch-offline Jan 24 00:55:57.983534 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:57.983603 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:55:57.983807 ignition[698]: parsed url from cmdline: "" Jan 24 00:55:57.983815 ignition[698]: no config URL provided Jan 24 00:55:57.983826 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:55:57.983846 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:55:57.983892 ignition[698]: op(1): [started] loading QEMU firmware config module Jan 24 00:55:57.983903 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:55:57.996859 ignition[698]: op(1): [finished] loading QEMU firmware config module Jan 24 00:55:58.045436 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:55:58.060896 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:55:58.095200 systemd-networkd[783]: lo: Link UP Jan 24 00:55:58.095240 systemd-networkd[783]: lo: Gained carrier Jan 24 00:55:58.097933 systemd-networkd[783]: Enumeration completed Jan 24 00:55:58.098329 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:55:58.099062 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:55:58.099068 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:55:58.100544 systemd-networkd[783]: eth0: Link UP Jan 24 00:55:58.100600 systemd-networkd[783]: eth0: Gained carrier Jan 24 00:55:58.100613 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:55:58.103932 systemd[1]: Reached target network.target - Network. Jan 24 00:55:58.132692 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:55:58.250328 ignition[698]: parsing config with SHA512: d6cdc0c2f50f19f3dfaba429fb558c2737cfee4efbbb3c4c126c02fc68e6bcc5b4ec116d75b295b31df8a31baef464e9bdc37591e8dacdcf66d833377d235510 Jan 24 00:55:58.373619 unknown[698]: fetched base config from "system" Jan 24 00:55:58.373672 unknown[698]: fetched user config from "qemu" Jan 24 00:55:58.374090 ignition[698]: fetch-offline: fetch-offline passed Jan 24 00:55:58.374225 ignition[698]: Ignition finished successfully Jan 24 00:55:58.384149 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:55:58.389421 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:55:58.408058 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 24 00:55:58.507426 ignition[787]: Ignition 2.19.0 Jan 24 00:55:58.507457 ignition[787]: Stage: kargs Jan 24 00:55:58.510238 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:58.510253 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:55:58.514935 ignition[787]: kargs: kargs passed Jan 24 00:55:58.515008 ignition[787]: Ignition finished successfully Jan 24 00:55:58.543353 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:55:58.688907 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:55:58.703289 ignition[795]: Ignition 2.19.0 Jan 24 00:55:58.703297 ignition[795]: Stage: disks Jan 24 00:55:58.703451 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:58.703462 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:55:58.704209 ignition[795]: disks: disks passed Jan 24 00:55:58.704252 ignition[795]: Ignition finished successfully Jan 24 00:55:58.949815 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:55:58.950168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:55:58.954944 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:55:58.960464 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:55:58.966609 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:55:58.975276 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:55:58.993810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:55:59.019138 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:55:59.028038 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:55:59.046707 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:55:59.149664 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:55:59.150330 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:55:59.153470 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:55:59.173695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:55:59.177620 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:55:59.183207 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:55:59.183299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:55:59.183336 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:55:59.192669 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:55:59.200195 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 24 00:55:59.215597 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Jan 24 00:55:59.222833 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:59.222893 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:59.222911 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:55:59.234684 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:55:59.236663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:55:59.260152 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:55:59.265320 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:55:59.273095 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:55:59.278227 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:55:59.396708 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:55:59.417888 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:55:59.423804 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:55:59.433544 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:59.428036 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:55:59.461053 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:55:59.470805 ignition[924]: INFO : Ignition 2.19.0 Jan 24 00:55:59.470805 ignition[924]: INFO : Stage: mount Jan 24 00:55:59.475352 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:59.475352 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:55:59.483448 ignition[924]: INFO : mount: mount passed Jan 24 00:55:59.485655 ignition[924]: INFO : Ignition finished successfully Jan 24 00:55:59.490211 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:55:59.503724 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:55:59.901927 systemd-networkd[783]: eth0: Gained IPv6LL Jan 24 00:56:00.159919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:56:00.174621 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Jan 24 00:56:00.182176 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:56:00.182237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:56:00.182250 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:56:00.191708 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:56:00.194059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:56:00.233078 ignition[955]: INFO : Ignition 2.19.0 Jan 24 00:56:00.233078 ignition[955]: INFO : Stage: files Jan 24 00:56:00.239920 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:00.239920 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:00.239920 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:56:00.239920 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:56:00.239920 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:56:00.263252 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:56:00.263252 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:56:00.263252 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:56:00.263252 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:56:00.263252 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:56:00.245730 unknown[955]: wrote ssh authorized keys file for user: core Jan 24 00:56:00.299648 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:56:00.446539 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:56:00.446539 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:56:00.456430 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 24 00:56:00.658095 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:56:01.353119 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:56:01.353119 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:56:01.367004 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:56:01.417052 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:56:01.417052 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:56:01.417052 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:56:01.417052 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:56:01.417052 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:56:01.417052 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:56:01.417052 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:56:01.417052 ignition[955]: INFO : files: files passed Jan 24 00:56:01.417052 ignition[955]: INFO : Ignition finished successfully Jan 24 00:56:01.403684 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:56:01.425868 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:56:01.433169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:56:01.440076 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 24 00:56:01.502399 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:56:01.440230 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:56:01.511533 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:01.511533 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:01.452259 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:56:01.535363 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:56:01.458761 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:56:01.480897 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:56:01.518487 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:56:01.518704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:56:01.527276 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:56:01.535345 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:56:01.539355 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:56:01.540483 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:56:01.566809 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:56:01.585817 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:56:01.599226 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:56:01.603430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:56:01.609894 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:56:01.615323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:56:01.615511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:56:01.623293 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:56:01.630386 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:56:01.637377 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:56:01.645142 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:56:01.652258 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:56:01.659929 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:56:01.667640 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:56:01.675891 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:56:01.683121 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:56:01.689229 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:56:01.693972 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:56:01.694162 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:56:01.700241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 24 00:56:01.704988 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:56:01.711753 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:56:01.712160 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:56:01.718744 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:56:01.718914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:56:01.724820 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:56:01.724957 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:56:01.732015 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:56:01.737646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:56:01.741703 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:56:01.745620 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:56:01.751036 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:56:01.757010 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:56:01.757117 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:56:01.762232 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:56:01.762320 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:56:01.767857 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:56:01.768010 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:56:01.818192 ignition[1009]: INFO : Ignition 2.19.0 Jan 24 00:56:01.818192 ignition[1009]: INFO : Stage: umount Jan 24 00:56:01.818192 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:56:01.818192 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:56:01.818192 ignition[1009]: INFO : umount: umount passed Jan 24 00:56:01.818192 ignition[1009]: INFO : Ignition finished successfully Jan 24 00:56:01.774175 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:56:01.774348 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:56:01.792993 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:56:01.797520 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:56:01.797728 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:56:01.807841 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:56:01.811605 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:56:01.811759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:56:01.821921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:56:01.822119 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:56:01.830935 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:56:01.831108 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:56:01.838408 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:56:01.838740 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:56:01.846522 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 24 00:56:01.847927 systemd[1]: Stopped target network.target - Network. Jan 24 00:56:01.854475 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:56:01.854647 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:56:01.858169 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:56:01.858242 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:56:01.863716 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:56:01.863831 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:56:01.869529 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:56:01.869653 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:56:01.876043 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:56:01.883723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:56:01.889653 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 24 00:56:01.893097 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:56:01.893297 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:56:01.900311 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:56:01.900507 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:56:01.908508 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:56:01.908625 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:56:01.931936 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:56:01.934702 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:56:01.934828 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:56:01.940511 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:56:01.940619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:56:01.945720 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:56:01.945812 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:56:01.945947 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:56:01.945994 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:56:01.946514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:56:01.947665 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:56:01.947830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:56:01.950520 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:56:01.950648 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:56:01.961826 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:56:01.962010 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:56:01.980342 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:56:01.980628 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:56:01.985739 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:56:01.985835 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 24 00:56:01.990764 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:56:01.990854 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:56:01.998273 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:56:01.998367 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:56:02.006289 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:56:02.006354 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:56:02.014096 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:56:02.014159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:56:02.037911 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:56:02.138639 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 24 00:56:02.042406 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:56:02.042491 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:56:02.048456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:56:02.048537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:02.055454 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:56:02.055732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:56:02.061857 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:56:02.088977 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:56:02.099759 systemd[1]: Switching root. Jan 24 00:56:02.169917 systemd-journald[193]: Journal stopped Jan 24 00:56:03.552039 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:56:03.552145 kernel: SELinux: policy capability open_perms=1 Jan 24 00:56:03.552168 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:56:03.552187 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:56:03.552211 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:56:03.552229 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:56:03.552246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:56:03.552274 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:56:03.552292 kernel: audit: type=1403 audit(1769216162.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:56:03.552313 systemd[1]: Successfully loaded SELinux policy in 51.311ms. Jan 24 00:56:03.552349 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.094ms. Jan 24 00:56:03.552374 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:56:03.552394 systemd[1]: Detected virtualization kvm. Jan 24 00:56:03.552418 systemd[1]: Detected architecture x86-64. Jan 24 00:56:03.552438 systemd[1]: Detected first boot. Jan 24 00:56:03.552460 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:56:03.552479 zram_generator::config[1052]: No configuration found. Jan 24 00:56:03.552502 systemd[1]: Populated /etc with preset unit settings. 
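The long +/- string in the systemd 255 version banner above is its compile-time feature list: a "+FOO" token means the feature was built in, "-FOO" means it was not, and "key=value" tokens record build defaults. A trivial parsing sketch, with the banner copied verbatim from the log:

```python
# Split systemd's compile-time feature banner (copied from the log above)
# into enabled/disabled feature sets plus key=value build options.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
          "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
          "default-hierarchy=unified")

tokens   = banner.split()
enabled  = {t[1:] for t in tokens if t.startswith("+")}
disabled = {t[1:] for t in tokens if t.startswith("-")}
options  = dict(t.split("=", 1) for t in tokens if "=" in t)

print(sorted(enabled))
print(sorted(disabled))
print(options)   # {'default-hierarchy': 'unified'}
```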
Jan 24 00:56:03.552524 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:56:03.552540 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:56:03.552619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:56:03.552650 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:56:03.552670 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:56:03.552689 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:56:03.552707 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:56:03.552763 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:56:03.552832 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:56:03.552863 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:56:03.552882 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:56:03.552905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:56:03.552926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:56:03.552944 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:56:03.552963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:56:03.552982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:56:03.553001 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:56:03.553018 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:56:03.553037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:56:03.553052 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:56:03.553072 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:56:03.553088 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:56:03.553106 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:56:03.553124 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:56:03.553144 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:56:03.553163 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:56:03.553181 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:56:03.553200 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:56:03.553224 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:56:03.553242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:56:03.553260 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:56:03.553277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:56:03.553295 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:56:03.553314 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
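Unit names such as system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device above come from systemd's unit-name escaping: "/" becomes "-", while "-", "\" and any byte outside [A-Za-z0-9:_.] is written as \xNN. The sketch below is a simplified approximation of `systemd-escape --path`, not systemd's actual implementation; edge cases (empty path, trailing separators, non-UTF-8 bytes) are only partially handled.

```python
import string

# Approximate systemd's path escaping for unit names (cf. systemd-escape --path).
# Simplified sketch: strip leading/trailing '/', map '/' -> '-', and emit
# '-', '\' or anything outside [A-Za-z0-9:_.] (plus a leading '.') as \xNN.
SAFE = set(string.ascii_letters + string.digits + ":_.")

def escape_path(path: str, suffix: str = "") -> str:
    p = path.strip("/") or "-"            # the root path "/" escapes to "-"
    out = []
    for i, ch in enumerate(p):
        if ch == "/":
            out.append("-")
        elif ch in SAFE and not (i == 0 and ch == "."):
            out.append(ch)
        else:                              # '-', '\', leading '.', spaces, ...
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + suffix

print(escape_path("/dev/disk/by-label/OEM", ".device"))
# -> dev-disk-by\x2dlabel-OEM.device  (matches the device unit named in the log)
```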
Jan 24 00:56:03.553333 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:56:03.553351 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:56:03.553370 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:03.553396 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:56:03.553415 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:56:03.553437 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:56:03.553457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:56:03.553477 systemd[1]: Reached target machines.target - Containers. Jan 24 00:56:03.553509 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:56:03.553527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:56:03.553544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:56:03.553626 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:56:03.553658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:56:03.553683 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:56:03.553707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:56:03.553727 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:56:03.553746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:56:03.553765 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:56:03.553783 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:56:03.553858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:56:03.553888 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:56:03.553908 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:56:03.553928 kernel: fuse: init (API version 7.39) Jan 24 00:56:03.553945 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:56:03.553965 kernel: ACPI: bus type drm_connector registered Jan 24 00:56:03.553984 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:56:03.554004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:56:03.554022 kernel: loop: module loaded Jan 24 00:56:03.554041 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:56:03.554066 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:56:03.554119 systemd-journald[1136]: Collecting audit messages is disabled. Jan 24 00:56:03.554156 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:56:03.554181 systemd-journald[1136]: Journal started Jan 24 00:56:03.554212 systemd-journald[1136]: Runtime Journal (/run/log/journal/bea0ce7cf774405ea1f6007917aecb32) is 6.0M, max 48.4M, 42.3M free. 
Jan 24 00:56:03.021591 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:56:03.046423 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:56:03.047416 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:56:03.048012 systemd[1]: systemd-journald.service: Consumed 1.519s CPU time. Jan 24 00:56:03.558725 systemd[1]: Stopped verity-setup.service. Jan 24 00:56:03.568674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:03.577153 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:56:03.578932 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:56:03.583094 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:56:03.587265 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:56:03.591625 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:56:03.596222 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:56:03.600934 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:56:03.605405 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:56:03.610747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:56:03.616545 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:56:03.616928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:56:03.622692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:56:03.623024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:56:03.628205 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:56:03.628527 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:56:03.633717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:56:03.634059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:56:03.640157 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:56:03.640470 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:56:03.645499 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:56:03.645852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:56:03.650976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:56:03.655932 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:56:03.661966 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:56:03.677708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:56:03.692670 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:56:03.708835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:56:03.715060 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:56:03.718389 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:56:03.718458 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 24 00:56:03.722982 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:56:03.728956 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:56:03.734390 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:56:03.738024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:56:03.740355 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:56:03.746086 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:56:03.750984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:56:03.752731 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:56:03.756769 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:56:03.758930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:56:03.765438 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:56:03.769989 systemd-journald[1136]: Time spent on flushing to /var/log/journal/bea0ce7cf774405ea1f6007917aecb32 is 17.800ms for 940 entries. Jan 24 00:56:03.769989 systemd-journald[1136]: System Journal (/var/log/journal/bea0ce7cf774405ea1f6007917aecb32) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:56:03.798644 systemd-journald[1136]: Received client request to flush runtime journal. Jan 24 00:56:03.785980 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:56:03.802259 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:56:03.814276 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:56:03.821032 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:56:03.829075 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:56:03.834842 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:56:03.839656 kernel: loop0: detected capacity change from 0 to 142488 Jan 24 00:56:03.843126 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:56:03.850044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:56:03.863995 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:56:03.875923 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:56:03.877213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:56:03.892974 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:56:03.902192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:56:03.909377 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:56:03.917631 kernel: loop1: detected capacity change from 0 to 140768 Jan 24 00:56:03.922654 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 24 00:56:03.925741 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:56:03.957442 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 24 00:56:03.957460 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 24 00:56:03.967679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:56:03.972632 kernel: loop2: detected capacity change from 0 to 229808 Jan 24 00:56:04.009650 kernel: loop3: detected capacity change from 0 to 142488 Jan 24 00:56:04.036611 kernel: loop4: detected capacity change from 0 to 140768 Jan 24 00:56:04.060625 kernel: loop5: detected capacity change from 0 to 229808 Jan 24 00:56:04.078785 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 24 00:56:04.079472 (sd-merge)[1190]: Merged extensions into '/usr'. Jan 24 00:56:04.085116 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:56:04.085136 systemd[1]: Reloading... Jan 24 00:56:04.165698 zram_generator::config[1214]: No configuration found. Jan 24 00:56:04.207016 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:56:04.344335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:04.390670 systemd[1]: Reloading finished in 304 ms. Jan 24 00:56:04.428399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:56:04.432991 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:56:04.437600 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:56:04.464015 systemd[1]: Starting ensure-sysext.service... Jan 24 00:56:04.468049 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:56:04.473713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:56:04.479594 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:56:04.479625 systemd[1]: Reloading... Jan 24 00:56:04.520007 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:56:04.520345 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:56:04.522521 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:56:04.522965 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 24 00:56:04.523109 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 24 00:56:04.529059 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:56:04.529139 systemd-tmpfiles[1255]: Skipping /boot Jan 24 00:56:04.538452 systemd-udevd[1256]: Using default interface naming scheme 'v255'. Jan 24 00:56:04.545490 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:56:04.546624 systemd-tmpfiles[1255]: Skipping /boot Jan 24 00:56:04.554646 zram_generator::config[1281]: No configuration found. 
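The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is the .raw file Ignition placed under /opt/extensions and linked at /etc/extensions/kubernetes.raw earlier in the log. An extension is only merged when its extension-release metadata is compatible with the host's os-release. The sketch below is a rough, simplified approximation of that compatibility check (it is not systemd's actual logic, and the flatcar values are illustrative assumptions, not read from this system):

```python
# Rough approximation of systemd-sysext's compatibility check: ID must match
# the host (or be "_any"), then SYSEXT_LEVEL or, failing that, VERSION_ID
# is compared. Simplified; real systemd has more fields and rules.
def sysext_matches(host: dict, ext: dict) -> bool:
    if ext.get("ID") == "_any":
        return True
    if ext.get("ID") != host.get("ID"):
        return False
    if "SYSEXT_LEVEL" in ext:
        return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
    if "VERSION_ID" in ext:
        return ext["VERSION_ID"] == host.get("VERSION_ID")
    return True

# Illustrative values only (assumptions):
host = {"ID": "flatcar", "VERSION_ID": "4230.0.0", "SYSEXT_LEVEL": "1.0"}
kubernetes_ext = {"ID": "flatcar", "SYSEXT_LEVEL": "1.0"}
print(sysext_matches(host, kubernetes_ext))   # True -> would be merged
```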
Jan 24 00:56:04.665514 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1306) Jan 24 00:56:04.722764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:04.738609 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:56:04.751612 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:56:04.762080 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:56:04.762391 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:56:04.803376 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:56:04.803473 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:56:04.846763 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:56:04.847526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:56:04.852977 systemd[1]: Reloading finished in 372 ms. Jan 24 00:56:04.913206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:56:04.923626 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:56:04.929711 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:56:04.953008 kernel: kvm_amd: TSC scaling supported Jan 24 00:56:04.953095 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:56:04.953147 kernel: kvm_amd: Nested Paging enabled Jan 24 00:56:04.957536 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:56:04.957627 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:56:04.970819 systemd[1]: Finished ensure-sysext.service. Jan 24 00:56:05.007754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:05.016936 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:56:05.023620 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:56:05.023689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:56:05.026993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:56:05.028326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:56:05.032520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:56:05.040930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:56:05.049953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:56:05.053978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:56:05.055941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:56:05.063975 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:56:05.073077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:56:05.081864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:56:05.089174 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 24 00:56:05.097940 augenrules[1379]: No rules Jan 24 00:56:05.106033 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:56:05.111398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:56:05.115139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:56:05.116364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:56:05.120644 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:56:05.124379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:56:05.124773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:56:05.129207 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:56:05.129393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:56:05.134238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:56:05.134541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:56:05.139370 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:56:05.139684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:56:05.143773 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:56:05.148383 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:56:05.163422 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:56:05.178912 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:56:05.182617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:56:05.182715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:56:05.185119 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:56:05.190375 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:56:05.193790 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:56:05.194370 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:56:05.205480 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:56:05.215693 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:56:05.314206 systemd-networkd[1372]: lo: Link UP Jan 24 00:56:05.314664 systemd-networkd[1372]: lo: Gained carrier Jan 24 00:56:05.317323 systemd-networkd[1372]: Enumeration completed Jan 24 00:56:05.318473 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:05.318715 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 00:56:05.320043 systemd-networkd[1372]: eth0: Link UP Jan 24 00:56:05.320133 systemd-networkd[1372]: eth0: Gained carrier Jan 24 00:56:05.320203 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:56:05.332615 systemd-resolved[1374]: Positive Trust Anchors: Jan 24 00:56:05.332662 systemd-resolved[1374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:56:05.332710 systemd-resolved[1374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:56:05.338190 systemd-resolved[1374]: Defaulting to hostname 'linux'. Jan 24 00:56:05.342657 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:56:05.343697 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Jan 24 00:56:06.181932 systemd-resolved[1374]: Clock change detected. Flushing caches. Jan 24 00:56:06.182010 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:56:06.182083 systemd-timesyncd[1377]: Initial clock synchronization to Sat 2026-01-24 00:56:06.181846 UTC. Jan 24 00:56:06.202969 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:56:06.204335 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:56:06.205329 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:56:06.206592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:56:06.207878 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:56:06.209283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:56:06.210083 systemd[1]: Reached target network.target - Network. Jan 24 00:56:06.211029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:56:06.211975 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:56:06.244996 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:56:06.249977 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:56:06.252908 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:56:06.253593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:56:06.257869 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:56:06.260969 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:56:06.264708 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:56:06.268643 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:56:06.272507 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
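The jump in journal timestamps around "Clock change detected. Flushing caches." is systemd-timesyncd stepping the clock after its first NTP exchange with 10.0.0.1: the preceding entry is stamped 00:56:05.343697 while the synchronized time is 00:56:06.181846, so later entries appear shifted forward by roughly 0.84 s. Note this difference also includes the real time that elapsed between the two entries, so it is only a rough upper bound on the step. A quick check with the two values from the log:

```python
from datetime import datetime

# Adjacent timestamps from the log, before and after timesyncd stepped the clock.
before = datetime.strptime("2026-01-24 00:56:05.343697", "%Y-%m-%d %H:%M:%S.%f")
after  = datetime.strptime("2026-01-24 00:56:06.181846", "%Y-%m-%d %H:%M:%S.%f")

step = (after - before).total_seconds()
print(f"apparent clock step: ~{step:.3f} s")   # ~0.838 s
```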
Jan 24 00:56:06.276073 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:56:06.280059 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:56:06.280124 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:56:06.282849 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:56:06.286919 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:56:06.293114 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:56:06.305805 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:56:06.311249 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:56:06.316723 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:56:06.321723 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:56:06.325048 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:56:06.329023 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:56:06.329094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:56:06.339656 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:56:06.345139 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:56:06.350005 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:56:06.354727 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:56:06.357541 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:56:06.359603 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:56:06.364644 jq[1421]: false Jan 24 00:56:06.366590 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:56:06.371646 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:56:06.379620 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 24 00:56:06.386316 extend-filesystems[1422]: Found loop3 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found loop4 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found loop5 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found sr0 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda1 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda2 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda3 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found usr Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda4 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda6 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda7 Jan 24 00:56:06.389624 extend-filesystems[1422]: Found vda9 Jan 24 00:56:06.389624 extend-filesystems[1422]: Checking size of /dev/vda9 Jan 24 00:56:06.468959 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 24 00:56:06.468991 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1308) Jan 24 00:56:06.469006 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:56:06.386960 dbus-daemon[1420]: [system] SELinux support is enabled Jan 24 00:56:06.389229 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:56:06.492184 extend-filesystems[1422]: Resized partition /dev/vda9 Jan 24 00:56:06.403008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:56:06.495204 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:56:06.404112 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:56:06.496208 update_engine[1438]: I20260124 00:56:06.479363 1438 main.cc:92] Flatcar Update Engine starting Jan 24 00:56:06.496208 update_engine[1438]: I20260124 00:56:06.483767 1438 update_check_scheduler.cc:74] Next update check in 3m36s Jan 24 00:56:06.416808 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:56:06.496615 jq[1443]: true Jan 24 00:56:06.446879 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:56:06.454314 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:56:06.479147 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:56:06.498295 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:56:06.498295 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:56:06.498295 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:56:06.479583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:56:06.522068 extend-filesystems[1422]: Resized filesystem in /dev/vda9 Jan 24 00:56:06.480205 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:56:06.480616 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:56:06.489131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:56:06.489527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:56:06.501485 systemd[1]: extend-filesystems.service: Deactivated successfully. 
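The extend-filesystems run grew the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from about 2.1 GiB to about 7.1 GiB. The block counts are taken from the kernel and resize2fs messages above; the conversion is simply blocks times 4096:

```python
# Convert the ext4 block counts reported for /dev/vda9 into sizes in GiB.
BLOCK = 4096                        # "(4k) blocks" per the kernel message
old_blocks, new_blocks = 553_472, 1_864_699

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after:  7.11 GiB
```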
Jan 24 00:56:06.501812 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:56:06.521248 systemd-logind[1429]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:56:06.521275 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:56:06.522256 systemd-logind[1429]: New seat seat0. Jan 24 00:56:06.525897 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:56:06.528206 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:56:06.528412 jq[1448]: true Jan 24 00:56:06.537733 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:56:06.553334 tar[1446]: linux-amd64/LICENSE Jan 24 00:56:06.552248 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:56:06.553842 tar[1446]: linux-amd64/helm Jan 24 00:56:06.558468 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:56:06.566385 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:56:06.585070 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:56:06.588630 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:56:06.588863 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:56:06.593739 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:56:06.593905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:56:06.594734 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:56:06.608834 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:56:06.616255 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:56:06.623270 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:56:06.632826 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:56:06.633131 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:56:06.644981 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:56:06.651896 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:56:06.662836 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:56:06.676125 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:56:06.689181 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:56:06.693726 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:56:06.782611 containerd[1450]: time="2026-01-24T00:56:06.782331215Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:56:06.802114 containerd[1450]: time="2026-01-24T00:56:06.802054652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:56:06.804619 containerd[1450]: time="2026-01-24T00:56:06.804536496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:06.804619 containerd[1450]: time="2026-01-24T00:56:06.804588654Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:56:06.804619 containerd[1450]: time="2026-01-24T00:56:06.804607709Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:56:06.804931 containerd[1450]: time="2026-01-24T00:56:06.804888975Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:56:06.804972 containerd[1450]: time="2026-01-24T00:56:06.804933047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805041 containerd[1450]: time="2026-01-24T00:56:06.805003608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805041 containerd[1450]: time="2026-01-24T00:56:06.805032402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805275 containerd[1450]: time="2026-01-24T00:56:06.805211286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805275 containerd[1450]: time="2026-01-24T00:56:06.805251952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805275 containerd[1450]: time="2026-01-24T00:56:06.805265227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805275 containerd[1450]: time="2026-01-24T00:56:06.805273963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805409 containerd[1450]: time="2026-01-24T00:56:06.805364842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805767 containerd[1450]: time="2026-01-24T00:56:06.805719034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805894 containerd[1450]: time="2026-01-24T00:56:06.805856661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:56:06.805894 containerd[1450]: time="2026-01-24T00:56:06.805885294Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 24 00:56:06.806020 containerd[1450]: time="2026-01-24T00:56:06.805983557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:56:06.806087 containerd[1450]: time="2026-01-24T00:56:06.806055411Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:56:06.812288 containerd[1450]: time="2026-01-24T00:56:06.812222710Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:56:06.812357 containerd[1450]: time="2026-01-24T00:56:06.812318048Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:56:06.812498 containerd[1450]: time="2026-01-24T00:56:06.812380354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:56:06.812498 containerd[1450]: time="2026-01-24T00:56:06.812482585Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:56:06.812543 containerd[1450]: time="2026-01-24T00:56:06.812505347Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:56:06.812732 containerd[1450]: time="2026-01-24T00:56:06.812668171Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:56:06.813037 containerd[1450]: time="2026-01-24T00:56:06.812986035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:56:06.813219 containerd[1450]: time="2026-01-24T00:56:06.813148258Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:56:06.813219 containerd[1450]: time="2026-01-24T00:56:06.813203981Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:56:06.813312 containerd[1450]: time="2026-01-24T00:56:06.813226013Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:56:06.813312 containerd[1450]: time="2026-01-24T00:56:06.813246882Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813312 containerd[1450]: time="2026-01-24T00:56:06.813267971Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813312 containerd[1450]: time="2026-01-24T00:56:06.813286636Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813312 containerd[1450]: time="2026-01-24T00:56:06.813309628Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813412 containerd[1450]: time="2026-01-24T00:56:06.813329986Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813412 containerd[1450]: time="2026-01-24T00:56:06.813347039Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813412 containerd[1450]: time="2026-01-24T00:56:06.813359151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 24 00:56:06.813412 containerd[1450]: time="2026-01-24T00:56:06.813370472Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:56:06.813412 containerd[1450]: time="2026-01-24T00:56:06.813388586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813412 containerd[1450]: time="2026-01-24T00:56:06.813402542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813589 containerd[1450]: time="2026-01-24T00:56:06.813520202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813589 containerd[1450]: time="2026-01-24T00:56:06.813546591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813589 containerd[1450]: time="2026-01-24T00:56:06.813569324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813592286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813609498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813626049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813642650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813662046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813672847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813717 containerd[1450]: time="2026-01-24T00:56:06.813713723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813857 containerd[1450]: time="2026-01-24T00:56:06.813735514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813857 containerd[1450]: time="2026-01-24T00:56:06.813757355Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:56:06.813857 containerd[1450]: time="2026-01-24T00:56:06.813779626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813857 containerd[1450]: time="2026-01-24T00:56:06.813799794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.813857 containerd[1450]: time="2026-01-24T00:56:06.813827095Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:56:06.813938 containerd[1450]: time="2026-01-24T00:56:06.813893489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 24 00:56:06.813938 containerd[1450]: time="2026-01-24T00:56:06.813921290Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:56:06.813974 containerd[1450]: time="2026-01-24T00:56:06.813936890Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:56:06.813974 containerd[1450]: time="2026-01-24T00:56:06.813955605Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:56:06.814013 containerd[1450]: time="2026-01-24T00:56:06.813969891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.814013 containerd[1450]: time="2026-01-24T00:56:06.813993655Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:56:06.814013 containerd[1450]: time="2026-01-24T00:56:06.814006088Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:56:06.814065 containerd[1450]: time="2026-01-24T00:56:06.814019984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:56:06.814572 containerd[1450]: time="2026-01-24T00:56:06.814368104Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:56:06.814572 containerd[1450]: time="2026-01-24T00:56:06.814543251Z" level=info msg="Connect containerd service" Jan 24 00:56:06.814836 containerd[1450]: time="2026-01-24T00:56:06.814600779Z" level=info msg="using legacy CRI server" Jan 24 00:56:06.814836 containerd[1450]: time="2026-01-24T00:56:06.814614374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:56:06.814836 containerd[1450]: time="2026-01-24T00:56:06.814767910Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:56:06.815704 containerd[1450]: time="2026-01-24T00:56:06.815625922Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:56:06.816003 containerd[1450]: time="2026-01-24T00:56:06.815941580Z" level=info msg="Start subscribing containerd event" Jan 24 00:56:06.816043 containerd[1450]: time="2026-01-24T00:56:06.816001201Z" level=info msg="Start recovering state" Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816115986Z" level=info msg="Start event monitor" Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816119753Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816175176Z" level=info msg="Start snapshots syncer" Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816188831Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816203368Z" level=info msg="Start streaming server" Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816241621Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:56:06.816318 containerd[1450]: time="2026-01-24T00:56:06.816306702Z" level=info msg="containerd successfully booted in 0.035150s" Jan 24 00:56:06.816511 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:56:07.007177 tar[1446]: linux-amd64/README.md Jan 24 00:56:07.028177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:56:08.098766 systemd-networkd[1372]: eth0: Gained IPv6LL Jan 24 00:56:08.102472 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:56:08.106404 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:56:08.119758 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:56:08.124069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:08.130071 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:56:08.160122 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:56:08.165177 systemd[1]: coreos-metadata.service: Deactivated successfully. 
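The "Start cri plugin with config {...}" dump above shows the effective CRI settings: the overlayfs snapshotter, the runc runtime driven through io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI configuration expected under /etc/cni/net.d (hence the "no network config found" error, which persists until a CNI conflist is installed there). A minimal /etc/containerd/config.toml fragment that would yield those values looks roughly like the sketch below; it is for orientation only and is not the file actually present on this host:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            # matches the SystemdCgroup:true seen in the logged runtime options
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"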
Jan 24 00:56:08.165413 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:56:08.168861 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:56:08.954620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:08.959384 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:56:08.963290 systemd[1]: Startup finished in 1.269s (kernel) + 7.560s (initrd) + 5.851s (userspace) = 14.681s. Jan 24 00:56:09.016275 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:56:09.253970 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:56:09.268079 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:51954.service - OpenSSH per-connection server daemon (10.0.0.1:51954). Jan 24 00:56:09.328363 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 51954 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:09.330556 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:09.344879 systemd-logind[1429]: New session 1 of user core. Jan 24 00:56:09.346866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:56:09.358002 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:56:09.374125 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:56:09.384945 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:56:09.388409 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:56:09.544667 systemd[1546]: Queued start job for default target default.target. Jan 24 00:56:09.557600 systemd[1546]: Created slice app.slice - User Application Slice. Jan 24 00:56:09.557662 systemd[1546]: Reached target paths.target - Paths. Jan 24 00:56:09.557686 systemd[1546]: Reached target timers.target - Timers. Jan 24 00:56:09.560994 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:56:09.577111 kubelet[1531]: E0124 00:56:09.577068 1531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:56:09.577340 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:56:09.577544 systemd[1546]: Reached target sockets.target - Sockets. Jan 24 00:56:09.577561 systemd[1546]: Reached target basic.target - Basic System. Jan 24 00:56:09.577608 systemd[1546]: Reached target default.target - Main User Target. Jan 24 00:56:09.577645 systemd[1546]: Startup finished in 179ms. Jan 24 00:56:09.577880 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:56:09.596631 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:56:09.596962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:56:09.597165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:56:09.597526 systemd[1]: kubelet.service: Consumed 1.104s CPU time. 
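The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected state on a node that has not yet been initialized: kubeadm writes that file (and the kubeadm-flags.env that supplies KUBELET_KUBEADM_ARGS, the variable reported as unset here) during init/join, after which the restarted kubelet comes up. For orientation only, a minimal KubeletConfiguration of the kind kubeadm generates looks like this; the values are illustrative and were not read from this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # must match the systemd cgroup driver containerd was started with
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests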
Jan 24 00:56:09.667259 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970). Jan 24 00:56:09.707754 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 51970 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:09.711349 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:09.717798 systemd-logind[1429]: New session 2 of user core. Jan 24 00:56:09.731830 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:56:09.796949 sshd[1560]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:09.808744 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:51970.service: Deactivated successfully. Jan 24 00:56:09.810561 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:56:09.811998 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:56:09.813642 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972). Jan 24 00:56:09.814876 systemd-logind[1429]: Removed session 2. Jan 24 00:56:09.875237 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:09.877522 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:09.883635 systemd-logind[1429]: New session 3 of user core. Jan 24 00:56:09.894797 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:56:09.951612 sshd[1567]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:09.968189 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:51972.service: Deactivated successfully. Jan 24 00:56:09.970309 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:56:09.973111 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:56:09.987002 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984). Jan 24 00:56:09.988072 systemd-logind[1429]: Removed session 3. Jan 24 00:56:10.024504 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:10.026372 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:10.042383 systemd-logind[1429]: New session 4 of user core. Jan 24 00:56:10.049759 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:56:10.109072 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:10.122006 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:51984.service: Deactivated successfully. Jan 24 00:56:10.124096 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:56:10.125997 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:56:10.136980 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:51990.service - OpenSSH per-connection server daemon (10.0.0.1:51990). Jan 24 00:56:10.138391 systemd-logind[1429]: Removed session 4. Jan 24 00:56:10.169538 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 51990 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:10.171412 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:10.177223 systemd-logind[1429]: New session 5 of user core. Jan 24 00:56:10.187703 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 24 00:56:10.258318 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:56:10.259056 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:10.283854 sudo[1585]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:10.286321 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:10.293328 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:51990.service: Deactivated successfully. Jan 24 00:56:10.295138 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:56:10.296584 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:56:10.312009 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002). Jan 24 00:56:10.313706 systemd-logind[1429]: Removed session 5. Jan 24 00:56:10.343889 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:10.345923 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:10.352009 systemd-logind[1429]: New session 6 of user core. Jan 24 00:56:10.365619 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:56:10.423384 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:56:10.423928 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:10.429255 sudo[1594]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:10.436604 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:56:10.437009 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:10.464002 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:56:10.466320 auditctl[1597]: No rules Jan 24 00:56:10.467056 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:56:10.467418 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:56:10.471402 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:56:10.513638 augenrules[1615]: No rules Jan 24 00:56:10.515359 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:56:10.516784 sudo[1593]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:10.519062 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:10.537649 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:52002.service: Deactivated successfully. Jan 24 00:56:10.540127 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:56:10.542311 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:56:10.553976 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:52004.service - OpenSSH per-connection server daemon (10.0.0.1:52004). Jan 24 00:56:10.555556 systemd-logind[1429]: Removed session 6. Jan 24 00:56:10.588330 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 52004 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:56:10.590391 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:56:10.596934 systemd-logind[1429]: New session 7 of user core. Jan 24 00:56:10.608815 systemd[1]: Started session-7.scope - Session 7 of User core. 
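Sessions 5 and 6 above amount to the following provisioning steps, reconstructed verbatim from the logged sudo COMMAND fields: switch SELinux to enforcing, remove the shipped SELinux audit rules, and reload auditd's rule set (which then reports "No rules"):

    sudo setenforce 1
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules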
Jan 24 00:56:10.669090 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:56:10.669519 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:56:10.985817 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:56:10.985976 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:56:11.327206 dockerd[1644]: time="2026-01-24T00:56:11.326997073Z" level=info msg="Starting up" Jan 24 00:56:11.695590 dockerd[1644]: time="2026-01-24T00:56:11.695306317Z" level=info msg="Loading containers: start." Jan 24 00:56:11.845526 kernel: Initializing XFRM netlink socket Jan 24 00:56:11.958710 systemd-networkd[1372]: docker0: Link UP Jan 24 00:56:11.986033 dockerd[1644]: time="2026-01-24T00:56:11.985961766Z" level=info msg="Loading containers: done." Jan 24 00:56:12.006337 dockerd[1644]: time="2026-01-24T00:56:12.006242982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:56:12.006300 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3546143541-merged.mount: Deactivated successfully. Jan 24 00:56:12.006971 dockerd[1644]: time="2026-01-24T00:56:12.006405225Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:56:12.006971 dockerd[1644]: time="2026-01-24T00:56:12.006575262Z" level=info msg="Daemon has completed initialization" Jan 24 00:56:12.052827 dockerd[1644]: time="2026-01-24T00:56:12.052687893Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:56:12.053017 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:56:12.890803 containerd[1450]: time="2026-01-24T00:56:12.890635752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 00:56:13.424382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312228675.mount: Deactivated successfully. 
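Once dockerd reports "API listen on /run/docker.sock", the daemon can be probed over that Unix socket. As a hypothetical check (not something run in this log), the Engine API's ping endpoint answers with OK:

    # query the Docker Engine API over the Unix socket; prints "OK" when the daemon is up
    curl --unix-socket /run/docker.sock http://localhost/_ping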
Jan 24 00:56:14.423178 containerd[1450]: time="2026-01-24T00:56:14.423106074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:14.424074 containerd[1450]: time="2026-01-24T00:56:14.424030045Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 24 00:56:14.425363 containerd[1450]: time="2026-01-24T00:56:14.425321719Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:14.428510 containerd[1450]: time="2026-01-24T00:56:14.428417598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:14.430616 containerd[1450]: time="2026-01-24T00:56:14.430557371Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.539863981s" Jan 24 00:56:14.430681 containerd[1450]: time="2026-01-24T00:56:14.430616893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 24 00:56:14.431289 containerd[1450]: time="2026-01-24T00:56:14.431254710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 24 00:56:15.555298 containerd[1450]: time="2026-01-24T00:56:15.555144899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:15.556268 containerd[1450]: time="2026-01-24T00:56:15.556208664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 24 00:56:15.557659 containerd[1450]: time="2026-01-24T00:56:15.557569431Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:15.562503 containerd[1450]: time="2026-01-24T00:56:15.562366783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:15.563487 containerd[1450]: time="2026-01-24T00:56:15.563397196Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.132105678s" Jan 24 00:56:15.563487 containerd[1450]: time="2026-01-24T00:56:15.563478799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 24 00:56:15.564159 
containerd[1450]: time="2026-01-24T00:56:15.564129373Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 24 00:56:16.544761 containerd[1450]: time="2026-01-24T00:56:16.544639768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:16.545700 containerd[1450]: time="2026-01-24T00:56:16.545652731Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 24 00:56:16.546997 containerd[1450]: time="2026-01-24T00:56:16.546949248Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:16.549979 containerd[1450]: time="2026-01-24T00:56:16.549925353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:16.551260 containerd[1450]: time="2026-01-24T00:56:16.551206473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 986.961764ms" Jan 24 00:56:16.551260 containerd[1450]: time="2026-01-24T00:56:16.551248541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 24 00:56:16.552103 containerd[1450]: time="2026-01-24T00:56:16.552047352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 00:56:17.529071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531702577.mount: Deactivated successfully. 
Jan 24 00:56:17.976687 containerd[1450]: time="2026-01-24T00:56:17.976415769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:17.977579 containerd[1450]: time="2026-01-24T00:56:17.977516215Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 24 00:56:17.978851 containerd[1450]: time="2026-01-24T00:56:17.978763944Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:17.981083 containerd[1450]: time="2026-01-24T00:56:17.981038479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:17.982052 containerd[1450]: time="2026-01-24T00:56:17.982011275Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.429921684s" Jan 24 00:56:17.982105 containerd[1450]: time="2026-01-24T00:56:17.982059365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 00:56:17.982634 containerd[1450]: time="2026-01-24T00:56:17.982560796Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 24 00:56:18.455081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737326472.mount: Deactivated successfully. 
Jan 24 00:56:19.169040 containerd[1450]: time="2026-01-24T00:56:19.168954053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:19.169987 containerd[1450]: time="2026-01-24T00:56:19.169948334Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 24 00:56:19.171045 containerd[1450]: time="2026-01-24T00:56:19.170999691Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:19.174090 containerd[1450]: time="2026-01-24T00:56:19.174052099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:19.175120 containerd[1450]: time="2026-01-24T00:56:19.175028982Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.19241577s" Jan 24 00:56:19.175120 containerd[1450]: time="2026-01-24T00:56:19.175080628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 24 00:56:19.175981 containerd[1450]: time="2026-01-24T00:56:19.175666477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:56:19.563342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195588191.mount: Deactivated successfully. 
Jan 24 00:56:19.570822 containerd[1450]: time="2026-01-24T00:56:19.570748606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:19.571824 containerd[1450]: time="2026-01-24T00:56:19.571719915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:56:19.572754 containerd[1450]: time="2026-01-24T00:56:19.572707037Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:19.575118 containerd[1450]: time="2026-01-24T00:56:19.575056573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:19.575890 containerd[1450]: time="2026-01-24T00:56:19.575833433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 400.136258ms" Jan 24 00:56:19.575890 containerd[1450]: time="2026-01-24T00:56:19.575872966Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:56:19.576369 containerd[1450]: time="2026-01-24T00:56:19.576321348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 00:56:19.847710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:56:19.857753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:20.032251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:20.037198 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:56:20.039760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530473604.mount: Deactivated successfully. Jan 24 00:56:20.080247 kubelet[1935]: E0124 00:56:20.080124 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:56:20.087207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:56:20.087417 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:56:21.454473 containerd[1450]: time="2026-01-24T00:56:21.454311772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:21.455346 containerd[1450]: time="2026-01-24T00:56:21.455310684Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 24 00:56:21.456887 containerd[1450]: time="2026-01-24T00:56:21.456832998Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:21.460184 containerd[1450]: time="2026-01-24T00:56:21.460124191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:21.462379 containerd[1450]: time="2026-01-24T00:56:21.462295837Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.885937119s" Jan 24 00:56:21.462379 containerd[1450]: time="2026-01-24T00:56:21.462357782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 00:56:24.426737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:24.440893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:24.466996 systemd[1]: Reloading requested from client PID 2025 ('systemctl') (unit session-7.scope)... Jan 24 00:56:24.467032 systemd[1]: Reloading... Jan 24 00:56:24.540520 zram_generator::config[2062]: No configuration found. Jan 24 00:56:24.670911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:24.746540 systemd[1]: Reloading finished in 279 ms. Jan 24 00:56:24.797061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:24.801073 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:24.802882 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:56:24.803212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:24.810825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:24.967958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:24.986785 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:56:25.031176 kubelet[2114]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:25.031176 kubelet[2114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
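The images pulled above (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.33.7, coredns v1.12.0, pause 3.10, etcd 3.5.21-0) are the standard control-plane set for a Kubernetes v1.33 node. Pre-pulling exactly this set is what, for example, the following kubeadm command performs; it is shown as an illustration only, since the log does not identify the tool that issued these CRI pull requests:

    kubeadm config images pull --kubernetes-version v1.33.7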
Jan 24 00:56:25.031176 kubelet[2114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:25.031176 kubelet[2114]: I0124 00:56:25.031116 2114 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:56:25.536169 kubelet[2114]: I0124 00:56:25.536108 2114 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:56:25.536169 kubelet[2114]: I0124 00:56:25.536155 2114 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:56:25.536579 kubelet[2114]: I0124 00:56:25.536516 2114 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:56:25.558777 kubelet[2114]: E0124 00:56:25.558685 2114 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:56:25.559650 kubelet[2114]: I0124 00:56:25.559597 2114 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:56:25.565359 kubelet[2114]: E0124 00:56:25.565240 2114 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:56:25.565359 kubelet[2114]: I0124 00:56:25.565272 2114 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:56:25.572240 kubelet[2114]: I0124 00:56:25.572187 2114 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:56:25.572667 kubelet[2114]: I0124 00:56:25.572600 2114 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:56:25.572796 kubelet[2114]: I0124 00:56:25.572637 2114 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:56:25.572796 kubelet[2114]: I0124 00:56:25.572779 2114 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:56:25.572796 kubelet[2114]: I0124 00:56:25.572788 2114 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:56:25.573551 kubelet[2114]: I0124 00:56:25.573508 2114 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:25.575659 kubelet[2114]: I0124 00:56:25.575594 2114 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:56:25.575704 kubelet[2114]: I0124 00:56:25.575661 2114 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:56:25.575704 kubelet[2114]: I0124 00:56:25.575695 2114 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:56:25.575756 kubelet[2114]: I0124 00:56:25.575710 2114 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:56:25.585891 kubelet[2114]: I0124 00:56:25.585787 2114 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:56:25.585953 kubelet[2114]: E0124 00:56:25.585887 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:56:25.586021 kubelet[2114]: E0124 00:56:25.585986 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:56:25.586669 kubelet[2114]: I0124 00:56:25.586642 2114 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:56:25.587218 kubelet[2114]: W0124 00:56:25.587163 2114 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:56:25.589959 kubelet[2114]: I0124 00:56:25.589901 2114 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:56:25.589995 kubelet[2114]: I0124 00:56:25.589971 2114 server.go:1289] "Started kubelet" Jan 24 00:56:25.590155 kubelet[2114]: I0124 00:56:25.590060 2114 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:56:25.591740 kubelet[2114]: I0124 00:56:25.591556 2114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:56:25.591740 kubelet[2114]: I0124 00:56:25.591570 2114 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:56:25.591740 kubelet[2114]: I0124 00:56:25.591592 2114 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:56:25.591740 kubelet[2114]: I0124 00:56:25.591623 2114 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:56:25.593204 kubelet[2114]: I0124 00:56:25.591554 2114 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:56:25.594030 kubelet[2114]: I0124 00:56:25.593972 2114 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:56:25.594414 kubelet[2114]: I0124 00:56:25.594369 2114 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:56:25.594526 kubelet[2114]: I0124 00:56:25.594503 2114 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:56:25.594765 kubelet[2114]: E0124 00:56:25.594739 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:56:25.595373 kubelet[2114]: E0124 00:56:25.595282 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:25.595543 kubelet[2114]: E0124 00:56:25.595421 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="200ms" Jan 24 00:56:25.595706 kubelet[2114]: I0124 00:56:25.595670 2114 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:56:25.597029 kubelet[2114]: E0124 00:56:25.595605 2114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": 
dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84bf40f5aa33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:56:25.589934643 +0000 UTC m=+0.598846551,LastTimestamp:2026-01-24 00:56:25.589934643 +0000 UTC m=+0.598846551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:56:25.597974 kubelet[2114]: E0124 00:56:25.597909 2114 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:56:25.598541 kubelet[2114]: I0124 00:56:25.598490 2114 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:56:25.598541 kubelet[2114]: I0124 00:56:25.598519 2114 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:56:25.617004 kubelet[2114]: I0124 00:56:25.616664 2114 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:56:25.617004 kubelet[2114]: I0124 00:56:25.616684 2114 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:56:25.617004 kubelet[2114]: I0124 00:56:25.616702 2114 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:25.617267 kubelet[2114]: I0124 00:56:25.617026 2114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:56:25.619737 kubelet[2114]: I0124 00:56:25.619695 2114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:56:25.619802 kubelet[2114]: I0124 00:56:25.619741 2114 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:56:25.619802 kubelet[2114]: I0124 00:56:25.619765 2114 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:56:25.619802 kubelet[2114]: I0124 00:56:25.619775 2114 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:56:25.619930 kubelet[2114]: E0124 00:56:25.619812 2114 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:56:25.620898 kubelet[2114]: E0124 00:56:25.620809 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:56:25.696513 kubelet[2114]: E0124 00:56:25.696316 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:25.712651 kubelet[2114]: I0124 00:56:25.712566 2114 policy_none.go:49] "None policy: Start" Jan 24 00:56:25.712651 kubelet[2114]: I0124 00:56:25.712612 2114 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:56:25.712651 kubelet[2114]: I0124 00:56:25.712626 2114 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:56:25.720417 kubelet[2114]: E0124 00:56:25.720340 2114 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:56:25.720736 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:56:25.732663 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:56:25.745336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:56:25.746806 kubelet[2114]: E0124 00:56:25.746755 2114 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:56:25.747032 kubelet[2114]: I0124 00:56:25.747003 2114 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:56:25.747032 kubelet[2114]: I0124 00:56:25.747026 2114 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:56:25.747405 kubelet[2114]: I0124 00:56:25.747218 2114 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:56:25.748231 kubelet[2114]: E0124 00:56:25.748204 2114 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:56:25.748345 kubelet[2114]: E0124 00:56:25.748235 2114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:56:25.797383 kubelet[2114]: E0124 00:56:25.797135 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="400ms" Jan 24 00:56:25.848963 kubelet[2114]: I0124 00:56:25.848899 2114 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:56:25.849275 kubelet[2114]: E0124 00:56:25.849233 2114 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Jan 24 00:56:25.933470 systemd[1]: Created slice kubepods-burstable-podacfa7d644ae53df8ad265502aa1cf937.slice - libcontainer container kubepods-burstable-podacfa7d644ae53df8ad265502aa1cf937.slice. Jan 24 00:56:25.951134 kubelet[2114]: E0124 00:56:25.951063 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:25.954617 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 24 00:56:25.956932 kubelet[2114]: E0124 00:56:25.956885 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:25.958933 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 24 00:56:25.960728 kubelet[2114]: E0124 00:56:25.960665 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:25.996664 kubelet[2114]: I0124 00:56:25.996588 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:25.996664 kubelet[2114]: I0124 00:56:25.996634 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:25.996664 kubelet[2114]: I0124 00:56:25.996654 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:25.996664 kubelet[2114]: I0124 00:56:25.996668 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:25.996892 kubelet[2114]: I0124 00:56:25.996682 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acfa7d644ae53df8ad265502aa1cf937-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"acfa7d644ae53df8ad265502aa1cf937\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:25.996892 kubelet[2114]: I0124 00:56:25.996696 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:25.996892 kubelet[2114]: I0124 00:56:25.996711 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:25.996892 kubelet[2114]: I0124 00:56:25.996726 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acfa7d644ae53df8ad265502aa1cf937-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"acfa7d644ae53df8ad265502aa1cf937\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:25.996892 kubelet[2114]: I0124 00:56:25.996747 2114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acfa7d644ae53df8ad265502aa1cf937-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"acfa7d644ae53df8ad265502aa1cf937\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:26.051751 kubelet[2114]: I0124 00:56:26.051611 2114 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:56:26.052208 kubelet[2114]: E0124 00:56:26.052048 2114 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Jan 24 00:56:26.198834 kubelet[2114]: E0124 00:56:26.198717 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="800ms" Jan 24 00:56:26.252220 kubelet[2114]: E0124 00:56:26.252123 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:26.253257 containerd[1450]: time="2026-01-24T00:56:26.252962916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:acfa7d644ae53df8ad265502aa1cf937,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:26.257353 kubelet[2114]: E0124 00:56:26.257299 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:26.258040 containerd[1450]: time="2026-01-24T00:56:26.257983804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:26.261970 kubelet[2114]: E0124 00:56:26.261883 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:26.262332 containerd[1450]: time="2026-01-24T00:56:26.262273210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:26.454209 kubelet[2114]: I0124 00:56:26.454109 2114 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:56:26.454674 kubelet[2114]: E0124 00:56:26.454595 2114 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Jan 24 00:56:26.511470 kubelet[2114]: E0124 00:56:26.511324 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:56:26.692676 kubelet[2114]: E0124 00:56:26.692585 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:56:26.760197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120316820.mount: Deactivated successfully. Jan 24 00:56:26.769545 containerd[1450]: time="2026-01-24T00:56:26.769475282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:26.773032 containerd[1450]: time="2026-01-24T00:56:26.772898231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:56:26.774070 containerd[1450]: time="2026-01-24T00:56:26.773999827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:26.775182 containerd[1450]: time="2026-01-24T00:56:26.775068968Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:26.776413 containerd[1450]: time="2026-01-24T00:56:26.776348095Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:26.777495 containerd[1450]: time="2026-01-24T00:56:26.777340457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:56:26.778609 containerd[1450]: time="2026-01-24T00:56:26.778550916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:56:26.780329 containerd[1450]: time="2026-01-24T00:56:26.780266148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:26.781136 containerd[1450]: time="2026-01-24T00:56:26.781070209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.016083ms" Jan 24 00:56:26.782823 containerd[1450]: time="2026-01-24T00:56:26.782765088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.685655ms" Jan 24 00:56:26.789481 containerd[1450]: time="2026-01-24T00:56:26.788120586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.784818ms" Jan 24 00:56:26.823805 kubelet[2114]: E0124 00:56:26.823707 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: 
Get \"https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:56:26.846391 kubelet[2114]: E0124 00:56:26.846283 2114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:56:26.908826 containerd[1450]: time="2026-01-24T00:56:26.908095085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:26.908826 containerd[1450]: time="2026-01-24T00:56:26.908140981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:26.908826 containerd[1450]: time="2026-01-24T00:56:26.908151520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:26.908826 containerd[1450]: time="2026-01-24T00:56:26.908245475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910218368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910260096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910273080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910343452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910159609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910221214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910235300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:26.910492 containerd[1450]: time="2026-01-24T00:56:26.910326750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:26.940642 systemd[1]: Started cri-containerd-08ff91ff0a16a52ecc7f834b1af2aac7a6df67bb288296ff265a487d21262d74.scope - libcontainer container 08ff91ff0a16a52ecc7f834b1af2aac7a6df67bb288296ff265a487d21262d74. 
Jan 24 00:56:26.942173 systemd[1]: Started cri-containerd-eccdac138ab4a5a01b79ef59f922121c4670e0a298040a3a35a52fa023d8a026.scope - libcontainer container eccdac138ab4a5a01b79ef59f922121c4670e0a298040a3a35a52fa023d8a026. Jan 24 00:56:26.946528 systemd[1]: Started cri-containerd-4d1ee1b1308f434bfdd5be49d03651ea75c3e932420f51141d0186b3e540937d.scope - libcontainer container 4d1ee1b1308f434bfdd5be49d03651ea75c3e932420f51141d0186b3e540937d. Jan 24 00:56:26.992278 containerd[1450]: time="2026-01-24T00:56:26.992186113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"eccdac138ab4a5a01b79ef59f922121c4670e0a298040a3a35a52fa023d8a026\"" Jan 24 00:56:26.993802 kubelet[2114]: E0124 00:56:26.993577 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:27.000063 kubelet[2114]: E0124 00:56:26.999992 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="1.6s" Jan 24 00:56:27.002543 containerd[1450]: time="2026-01-24T00:56:27.002387772Z" level=info msg="CreateContainer within sandbox \"eccdac138ab4a5a01b79ef59f922121c4670e0a298040a3a35a52fa023d8a026\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:56:27.003705 containerd[1450]: time="2026-01-24T00:56:27.003605493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:acfa7d644ae53df8ad265502aa1cf937,Namespace:kube-system,Attempt:0,} returns sandbox id \"08ff91ff0a16a52ecc7f834b1af2aac7a6df67bb288296ff265a487d21262d74\"" Jan 24 00:56:27.004641 kubelet[2114]: E0124 00:56:27.004610 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:27.008977 containerd[1450]: time="2026-01-24T00:56:27.008950840Z" level=info msg="CreateContainer within sandbox \"08ff91ff0a16a52ecc7f834b1af2aac7a6df67bb288296ff265a487d21262d74\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:56:27.011273 containerd[1450]: time="2026-01-24T00:56:27.011176254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1ee1b1308f434bfdd5be49d03651ea75c3e932420f51141d0186b3e540937d\"" Jan 24 00:56:27.013164 kubelet[2114]: E0124 00:56:27.013025 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:27.017663 containerd[1450]: time="2026-01-24T00:56:27.017639047Z" level=info msg="CreateContainer within sandbox \"4d1ee1b1308f434bfdd5be49d03651ea75c3e932420f51141d0186b3e540937d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:56:27.024535 containerd[1450]: time="2026-01-24T00:56:27.024485847Z" level=info msg="CreateContainer within sandbox \"eccdac138ab4a5a01b79ef59f922121c4670e0a298040a3a35a52fa023d8a026\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"14f0f8066fa0dcc1522585efaf9fb632a8a2f288379c99b1ce3262431955203f\"" Jan 24 00:56:27.025326 containerd[1450]: time="2026-01-24T00:56:27.025289031Z" level=info msg="StartContainer for \"14f0f8066fa0dcc1522585efaf9fb632a8a2f288379c99b1ce3262431955203f\"" Jan 24 00:56:27.033682 containerd[1450]: time="2026-01-24T00:56:27.033569384Z" level=info msg="CreateContainer within sandbox \"08ff91ff0a16a52ecc7f834b1af2aac7a6df67bb288296ff265a487d21262d74\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"497a7187e81f9c5b3667242e596358eb1ae2554b822dbba16b41fc994121a3c1\"" Jan 24 00:56:27.035224 containerd[1450]: time="2026-01-24T00:56:27.034116525Z" level=info msg="StartContainer for \"497a7187e81f9c5b3667242e596358eb1ae2554b822dbba16b41fc994121a3c1\"" Jan 24 00:56:27.044268 containerd[1450]: time="2026-01-24T00:56:27.044191453Z" level=info msg="CreateContainer within sandbox \"4d1ee1b1308f434bfdd5be49d03651ea75c3e932420f51141d0186b3e540937d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eff2f0137ce4df3738f227fd593ea624e16d25d885dab7b6a403c3370174e9cb\"" Jan 24 00:56:27.045020 containerd[1450]: time="2026-01-24T00:56:27.044840462Z" level=info msg="StartContainer for \"eff2f0137ce4df3738f227fd593ea624e16d25d885dab7b6a403c3370174e9cb\"" Jan 24 00:56:27.064690 systemd[1]: Started cri-containerd-14f0f8066fa0dcc1522585efaf9fb632a8a2f288379c99b1ce3262431955203f.scope - libcontainer container 14f0f8066fa0dcc1522585efaf9fb632a8a2f288379c99b1ce3262431955203f. Jan 24 00:56:27.073628 systemd[1]: Started cri-containerd-497a7187e81f9c5b3667242e596358eb1ae2554b822dbba16b41fc994121a3c1.scope - libcontainer container 497a7187e81f9c5b3667242e596358eb1ae2554b822dbba16b41fc994121a3c1. Jan 24 00:56:27.078576 systemd[1]: Started cri-containerd-eff2f0137ce4df3738f227fd593ea624e16d25d885dab7b6a403c3370174e9cb.scope - libcontainer container eff2f0137ce4df3738f227fd593ea624e16d25d885dab7b6a403c3370174e9cb. 
Jan 24 00:56:27.132035 containerd[1450]: time="2026-01-24T00:56:27.131939342Z" level=info msg="StartContainer for \"14f0f8066fa0dcc1522585efaf9fb632a8a2f288379c99b1ce3262431955203f\" returns successfully" Jan 24 00:56:27.137237 containerd[1450]: time="2026-01-24T00:56:27.137132816Z" level=info msg="StartContainer for \"eff2f0137ce4df3738f227fd593ea624e16d25d885dab7b6a403c3370174e9cb\" returns successfully" Jan 24 00:56:27.146368 containerd[1450]: time="2026-01-24T00:56:27.146261039Z" level=info msg="StartContainer for \"497a7187e81f9c5b3667242e596358eb1ae2554b822dbba16b41fc994121a3c1\" returns successfully" Jan 24 00:56:27.256912 kubelet[2114]: I0124 00:56:27.256817 2114 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:56:27.631305 kubelet[2114]: E0124 00:56:27.631232 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:27.631497 kubelet[2114]: E0124 00:56:27.631412 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:27.638662 kubelet[2114]: E0124 00:56:27.638610 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:27.638819 kubelet[2114]: E0124 00:56:27.638774 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:27.641136 kubelet[2114]: E0124 00:56:27.641088 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:27.641272 kubelet[2114]: E0124 00:56:27.641219 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:28.571667 kubelet[2114]: I0124 00:56:28.571629 2114 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:56:28.571667 kubelet[2114]: E0124 00:56:28.571663 2114 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:56:28.586676 kubelet[2114]: E0124 00:56:28.586594 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:28.643974 kubelet[2114]: E0124 00:56:28.643905 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:28.644148 kubelet[2114]: E0124 00:56:28.644112 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:28.644989 kubelet[2114]: E0124 00:56:28.644541 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:28.644989 kubelet[2114]: E0124 00:56:28.644666 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:28.646098 
kubelet[2114]: E0124 00:56:28.646065 2114 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:56:28.646197 kubelet[2114]: E0124 00:56:28.646177 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:28.687382 kubelet[2114]: E0124 00:56:28.687280 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:28.788144 kubelet[2114]: E0124 00:56:28.788042 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:28.888949 kubelet[2114]: E0124 00:56:28.888824 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:28.989930 kubelet[2114]: E0124 00:56:28.989791 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:29.090633 kubelet[2114]: E0124 00:56:29.090549 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:29.191982 kubelet[2114]: E0124 00:56:29.191718 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:29.292931 kubelet[2114]: E0124 00:56:29.292793 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:29.393040 kubelet[2114]: E0124 00:56:29.392948 2114 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:29.498220 kubelet[2114]: I0124 00:56:29.498059 2114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:29.509820 kubelet[2114]: I0124 00:56:29.509674 2114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:29.516160 kubelet[2114]: I0124 00:56:29.516079 2114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:29.581788 kubelet[2114]: I0124 00:56:29.581665 2114 apiserver.go:52] "Watching apiserver" Jan 24 00:56:29.595228 kubelet[2114]: I0124 00:56:29.595181 2114 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:56:29.643936 kubelet[2114]: I0124 00:56:29.643705 2114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:29.644556 kubelet[2114]: I0124 00:56:29.644219 2114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:29.644556 kubelet[2114]: I0124 00:56:29.644318 2114 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:29.651487 kubelet[2114]: E0124 00:56:29.651408 2114 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:29.651771 kubelet[2114]: E0124 00:56:29.651696 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:29.651771 
kubelet[2114]: E0124 00:56:29.651570 2114 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:29.651844 kubelet[2114]: E0124 00:56:29.651838 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:29.651870 kubelet[2114]: E0124 00:56:29.651514 2114 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:29.652039 kubelet[2114]: E0124 00:56:29.651981 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:30.645772 kubelet[2114]: E0124 00:56:30.645698 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:30.645772 kubelet[2114]: E0124 00:56:30.645750 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:30.646220 kubelet[2114]: E0124 00:56:30.645852 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:30.923034 systemd[1]: Reloading requested from client PID 2404 ('systemctl') (unit session-7.scope)... Jan 24 00:56:30.923072 systemd[1]: Reloading... Jan 24 00:56:31.021518 zram_generator::config[2443]: No configuration found. Jan 24 00:56:31.163069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:56:31.245855 systemd[1]: Reloading finished in 322 ms. Jan 24 00:56:31.291095 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:31.301803 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:56:31.302109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:31.302171 systemd[1]: kubelet.service: Consumed 1.208s CPU time, 131.5M memory peak, 0B memory swap peak. Jan 24 00:56:31.309798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:56:31.478622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:56:31.484127 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:56:31.521740 kubelet[2488]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:31.523478 kubelet[2488]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:56:31.523478 kubelet[2488]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:56:31.523478 kubelet[2488]: I0124 00:56:31.522339 2488 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:56:31.532544 kubelet[2488]: I0124 00:56:31.532410 2488 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:56:31.532544 kubelet[2488]: I0124 00:56:31.532520 2488 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:56:31.532930 kubelet[2488]: I0124 00:56:31.532792 2488 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:56:31.534864 kubelet[2488]: I0124 00:56:31.534825 2488 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:56:31.536973 kubelet[2488]: I0124 00:56:31.536928 2488 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:56:31.541459 kubelet[2488]: E0124 00:56:31.540334 2488 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:56:31.541459 kubelet[2488]: I0124 00:56:31.540358 2488 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:56:31.545619 kubelet[2488]: I0124 00:56:31.545596 2488 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:56:31.545968 kubelet[2488]: I0124 00:56:31.545926 2488 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:56:31.546133 kubelet[2488]: I0124 00:56:31.545976 2488 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 
00:56:31.546234 kubelet[2488]: I0124 00:56:31.546146 2488 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:56:31.546234 kubelet[2488]: I0124 00:56:31.546155 2488 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:56:31.546234 kubelet[2488]: I0124 00:56:31.546198 2488 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:31.546364 kubelet[2488]: I0124 00:56:31.546348 2488 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:56:31.546364 kubelet[2488]: I0124 00:56:31.546362 2488 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:56:31.547241 kubelet[2488]: I0124 00:56:31.546381 2488 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:56:31.547241 kubelet[2488]: I0124 00:56:31.546395 2488 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:56:31.548565 kubelet[2488]: I0124 00:56:31.547770 2488 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:56:31.548565 kubelet[2488]: I0124 00:56:31.548201 2488 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:56:31.551640 kubelet[2488]: I0124 00:56:31.550793 2488 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:56:31.552492 kubelet[2488]: I0124 00:56:31.552355 2488 server.go:1289] "Started kubelet" Jan 24 00:56:31.553112 kubelet[2488]: I0124 00:56:31.553056 2488 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:56:31.559077 kubelet[2488]: I0124 00:56:31.559010 2488 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:56:31.559709 kubelet[2488]: I0124 00:56:31.559534 2488 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:56:31.559709 kubelet[2488]: I0124 00:56:31.559700 2488 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:56:31.561614 kubelet[2488]: I0124 00:56:31.561572 2488 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:56:31.562253 kubelet[2488]: I0124 00:56:31.562210 2488 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:56:31.564695 kubelet[2488]: E0124 00:56:31.564680 2488 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:56:31.564998 kubelet[2488]: E0124 00:56:31.564985 2488 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:56:31.565064 kubelet[2488]: I0124 00:56:31.565055 2488 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:56:31.565311 kubelet[2488]: I0124 00:56:31.565297 2488 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:56:31.565574 kubelet[2488]: I0124 00:56:31.565563 2488 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:56:31.566372 kubelet[2488]: I0124 00:56:31.566352 2488 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:56:31.569126 kubelet[2488]: I0124 00:56:31.569073 2488 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:56:31.569126 kubelet[2488]: I0124 00:56:31.569104 2488 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:56:31.582298 kubelet[2488]: I0124 00:56:31.582235 2488 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:56:31.583924 kubelet[2488]: I0124 00:56:31.583818 2488 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:56:31.583924 kubelet[2488]: I0124 00:56:31.583835 2488 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:56:31.583924 kubelet[2488]: I0124 00:56:31.583852 2488 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
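
The restarted kubelet above repeats the deprecation warnings for --container-runtime-endpoint and --volume-plugin-dir, and it keeps logging "Nameserver limits exceeded" because the host resolv.conf carries more than the three nameservers pod DNS can use. Both are normally handled in the KubeletConfiguration file rather than on the command line; a minimal sketch (the field values are illustrative assumptions, not read from this node's actual config):

    # /var/lib/kubelet/config.yaml (kubeadm's default location; match the unit's --config flag)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                        # matches "CgroupDriver":"systemd" in the log above
    staticPodPath: /etc/kubernetes/manifests     # the "Adding static pod path" entry above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # replaces the deprecated flag
    volumePluginDir: /var/lib/kubelet/volumeplugins                    # replaces --volume-plugin-dir (example path)
    resolvConf: /run/systemd/resolve/resolv.conf # a resolv.conf with at most three nameservers avoids the dns.go warnings
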
Jan 24 00:56:31.583924 kubelet[2488]: I0124 00:56:31.583860 2488 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:56:31.584042 kubelet[2488]: E0124 00:56:31.583927 2488 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:56:31.617491 kubelet[2488]: I0124 00:56:31.617408 2488 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:56:31.617647 kubelet[2488]: I0124 00:56:31.617615 2488 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:56:31.617699 kubelet[2488]: I0124 00:56:31.617653 2488 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:56:31.617796 kubelet[2488]: I0124 00:56:31.617769 2488 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:56:31.617824 kubelet[2488]: I0124 00:56:31.617794 2488 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:56:31.617824 kubelet[2488]: I0124 00:56:31.617809 2488 policy_none.go:49] "None policy: Start" Jan 24 00:56:31.617824 kubelet[2488]: I0124 00:56:31.617818 2488 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:56:31.617871 kubelet[2488]: I0124 00:56:31.617829 2488 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:56:31.617958 kubelet[2488]: I0124 00:56:31.617937 2488 state_mem.go:75] "Updated machine memory state" Jan 24 00:56:31.623152 kubelet[2488]: E0124 00:56:31.623104 2488 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:56:31.623363 kubelet[2488]: I0124 00:56:31.623313 2488 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:56:31.623394 kubelet[2488]: I0124 00:56:31.623354 2488 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:56:31.623951 kubelet[2488]: I0124 00:56:31.623664 2488 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:56:31.625335 kubelet[2488]: E0124 00:56:31.625272 2488 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:56:31.685094 kubelet[2488]: I0124 00:56:31.685024 2488 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:31.685307 kubelet[2488]: I0124 00:56:31.685254 2488 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:31.685608 kubelet[2488]: I0124 00:56:31.685506 2488 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.692122 kubelet[2488]: E0124 00:56:31.692057 2488 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:31.693218 kubelet[2488]: E0124 00:56:31.693108 2488 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.693218 kubelet[2488]: E0124 00:56:31.693195 2488 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:31.729871 kubelet[2488]: I0124 00:56:31.729796 2488 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:56:31.739135 kubelet[2488]: I0124 00:56:31.738955 2488 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:56:31.739135 kubelet[2488]: I0124 00:56:31.739119 2488 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:56:31.766600 kubelet[2488]: I0124 00:56:31.766506 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acfa7d644ae53df8ad265502aa1cf937-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"acfa7d644ae53df8ad265502aa1cf937\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:31.867870 kubelet[2488]: I0124 00:56:31.867716 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acfa7d644ae53df8ad265502aa1cf937-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"acfa7d644ae53df8ad265502aa1cf937\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:31.867870 kubelet[2488]: I0124 00:56:31.867757 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acfa7d644ae53df8ad265502aa1cf937-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"acfa7d644ae53df8ad265502aa1cf937\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:31.867870 kubelet[2488]: I0124 00:56:31.867781 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.867870 kubelet[2488]: I0124 00:56:31.867794 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.867870 kubelet[2488]: I0124 00:56:31.867848 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.868077 kubelet[2488]: I0124 00:56:31.867892 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.868077 kubelet[2488]: I0124 00:56:31.867950 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:56:31.868077 kubelet[2488]: I0124 00:56:31.867975 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:31.993638 kubelet[2488]: E0124 00:56:31.993477 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:31.993638 kubelet[2488]: E0124 00:56:31.993494 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:31.993638 kubelet[2488]: E0124 00:56:31.993591 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:32.547526 kubelet[2488]: I0124 00:56:32.547477 2488 apiserver.go:52] "Watching apiserver" Jan 24 00:56:32.566005 kubelet[2488]: I0124 00:56:32.565887 2488 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:56:32.600057 kubelet[2488]: I0124 00:56:32.599978 2488 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:32.600255 kubelet[2488]: E0124 00:56:32.600212 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:32.600403 kubelet[2488]: I0124 00:56:32.600354 2488 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:32.611695 kubelet[2488]: E0124 00:56:32.611638 2488 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:56:32.611886 kubelet[2488]: E0124 00:56:32.611825 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:32.612783 kubelet[2488]: E0124 00:56:32.612738 2488 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:56:32.612890 kubelet[2488]: E0124 00:56:32.612847 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:32.642827 kubelet[2488]: I0124 00:56:32.642762 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.6427204140000002 podStartE2EDuration="3.642720414s" podCreationTimestamp="2026-01-24 00:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:32.628955395 +0000 UTC m=+1.139650787" watchObservedRunningTime="2026-01-24 00:56:32.642720414 +0000 UTC m=+1.153415807" Jan 24 00:56:32.644109 kubelet[2488]: I0124 00:56:32.643830 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.643818785 podStartE2EDuration="3.643818785s" podCreationTimestamp="2026-01-24 00:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:32.643001893 +0000 UTC m=+1.153697305" watchObservedRunningTime="2026-01-24 00:56:32.643818785 +0000 UTC m=+1.154514177" Jan 24 00:56:33.602220 kubelet[2488]: E0124 00:56:33.602153 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:33.602220 kubelet[2488]: E0124 00:56:33.602154 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:35.158684 kubelet[2488]: E0124 00:56:35.158621 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:35.751871 kubelet[2488]: I0124 00:56:35.751840 2488 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:56:35.752290 containerd[1450]: time="2026-01-24T00:56:35.752194327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
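
The last two entries above show the kubelet pushing the node's pod CIDR (192.168.0.0/24) down to containerd, and containerd noting that it still has no CNI network config; until a CNI plugin (here, Calico via the tigera-operator that appears below) writes one, only host-network pods can start. Two quick checks that line up with these messages (a sketch, assuming a reachable API server and the default CNI config directory):

    # The CIDR the kubelet reported to the runtime
    kubectl get node localhost -o jsonpath='{.spec.podCIDR}{"\n"}'

    # containerd waits for a network config file to appear here
    ls /etc/cni/net.d/
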
Jan 24 00:56:35.752730 kubelet[2488]: I0124 00:56:35.752475 2488 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:56:36.624139 kubelet[2488]: I0124 00:56:36.624055 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.624035108 podStartE2EDuration="7.624035108s" podCreationTimestamp="2026-01-24 00:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:32.6524778 +0000 UTC m=+1.163173192" watchObservedRunningTime="2026-01-24 00:56:36.624035108 +0000 UTC m=+5.134730500" Jan 24 00:56:36.636209 systemd[1]: Created slice kubepods-besteffort-pod835eb99d_ccf4_4dd6_a8b7_a067918b9bf2.slice - libcontainer container kubepods-besteffort-pod835eb99d_ccf4_4dd6_a8b7_a067918b9bf2.slice. Jan 24 00:56:36.699526 kubelet[2488]: I0124 00:56:36.699355 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/835eb99d-ccf4-4dd6-a8b7-a067918b9bf2-xtables-lock\") pod \"kube-proxy-dnn4t\" (UID: \"835eb99d-ccf4-4dd6-a8b7-a067918b9bf2\") " pod="kube-system/kube-proxy-dnn4t" Jan 24 00:56:36.699526 kubelet[2488]: I0124 00:56:36.699490 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/835eb99d-ccf4-4dd6-a8b7-a067918b9bf2-lib-modules\") pod \"kube-proxy-dnn4t\" (UID: \"835eb99d-ccf4-4dd6-a8b7-a067918b9bf2\") " pod="kube-system/kube-proxy-dnn4t" Jan 24 00:56:36.699811 kubelet[2488]: I0124 00:56:36.699536 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68j4h\" (UniqueName: \"kubernetes.io/projected/835eb99d-ccf4-4dd6-a8b7-a067918b9bf2-kube-api-access-68j4h\") pod \"kube-proxy-dnn4t\" (UID: \"835eb99d-ccf4-4dd6-a8b7-a067918b9bf2\") " pod="kube-system/kube-proxy-dnn4t" Jan 24 00:56:36.699811 kubelet[2488]: I0124 00:56:36.699579 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/835eb99d-ccf4-4dd6-a8b7-a067918b9bf2-kube-proxy\") pod \"kube-proxy-dnn4t\" (UID: \"835eb99d-ccf4-4dd6-a8b7-a067918b9bf2\") " pod="kube-system/kube-proxy-dnn4t" Jan 24 00:56:36.943330 kubelet[2488]: E0124 00:56:36.943249 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:36.944102 containerd[1450]: time="2026-01-24T00:56:36.944026904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnn4t,Uid:835eb99d-ccf4-4dd6-a8b7-a067918b9bf2,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:36.951695 systemd[1]: Created slice kubepods-besteffort-podef393edf_33fc_41ee_b554_611a10d41473.slice - libcontainer container kubepods-besteffort-podef393edf_33fc_41ee_b554_611a10d41473.slice. Jan 24 00:56:36.975116 containerd[1450]: time="2026-01-24T00:56:36.974740376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:36.975116 containerd[1450]: time="2026-01-24T00:56:36.974796892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:36.975116 containerd[1450]: time="2026-01-24T00:56:36.974810136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:36.975116 containerd[1450]: time="2026-01-24T00:56:36.974965626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:37.001837 kubelet[2488]: I0124 00:56:37.001769 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbj7n\" (UniqueName: \"kubernetes.io/projected/ef393edf-33fc-41ee-b554-611a10d41473-kube-api-access-nbj7n\") pod \"tigera-operator-7dcd859c48-jwqc5\" (UID: \"ef393edf-33fc-41ee-b554-611a10d41473\") " pod="tigera-operator/tigera-operator-7dcd859c48-jwqc5" Jan 24 00:56:37.001837 kubelet[2488]: I0124 00:56:37.001816 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef393edf-33fc-41ee-b554-611a10d41473-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jwqc5\" (UID: \"ef393edf-33fc-41ee-b554-611a10d41473\") " pod="tigera-operator/tigera-operator-7dcd859c48-jwqc5" Jan 24 00:56:37.013771 systemd[1]: Started cri-containerd-6cf21e2a94ea6703f1ea9855608eae1a32272b2c70bc97d13bdc51d3b2226a12.scope - libcontainer container 6cf21e2a94ea6703f1ea9855608eae1a32272b2c70bc97d13bdc51d3b2226a12. Jan 24 00:56:37.040397 containerd[1450]: time="2026-01-24T00:56:37.040340750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dnn4t,Uid:835eb99d-ccf4-4dd6-a8b7-a067918b9bf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cf21e2a94ea6703f1ea9855608eae1a32272b2c70bc97d13bdc51d3b2226a12\"" Jan 24 00:56:37.041121 kubelet[2488]: E0124 00:56:37.041071 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:37.051674 containerd[1450]: time="2026-01-24T00:56:37.051612102Z" level=info msg="CreateContainer within sandbox \"6cf21e2a94ea6703f1ea9855608eae1a32272b2c70bc97d13bdc51d3b2226a12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:56:37.071805 containerd[1450]: time="2026-01-24T00:56:37.071737529Z" level=info msg="CreateContainer within sandbox \"6cf21e2a94ea6703f1ea9855608eae1a32272b2c70bc97d13bdc51d3b2226a12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7eb7c21b6c1269debd50cdfa7a9808bdfac1d362bf89eee43752556c588612ff\"" Jan 24 00:56:37.072659 containerd[1450]: time="2026-01-24T00:56:37.072600418Z" level=info msg="StartContainer for \"7eb7c21b6c1269debd50cdfa7a9808bdfac1d362bf89eee43752556c588612ff\"" Jan 24 00:56:37.091634 kubelet[2488]: E0124 00:56:37.091561 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:37.107659 systemd[1]: Started cri-containerd-7eb7c21b6c1269debd50cdfa7a9808bdfac1d362bf89eee43752556c588612ff.scope - libcontainer container 7eb7c21b6c1269debd50cdfa7a9808bdfac1d362bf89eee43752556c588612ff. 
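
kube-proxy-dnn4t above is the first regular, API-served pod on the node: a DaemonSet pod whose configuration comes from the kube-proxy ConfigMap and which mounts xtables-lock and lib-modules from the host to program iptables. Once the API server is answering, it can be inspected like any other workload (a sketch; the label and ConfigMap names are the kubeadm defaults):

    # The DaemonSet pod and the node it landed on
    kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide

    # The rendered kube-proxy configuration mounted into the pod
    kubectl -n kube-system get configmap kube-proxy -o yaml
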
Jan 24 00:56:37.145384 containerd[1450]: time="2026-01-24T00:56:37.145311011Z" level=info msg="StartContainer for \"7eb7c21b6c1269debd50cdfa7a9808bdfac1d362bf89eee43752556c588612ff\" returns successfully" Jan 24 00:56:37.255404 containerd[1450]: time="2026-01-24T00:56:37.255255630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jwqc5,Uid:ef393edf-33fc-41ee-b554-611a10d41473,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:56:37.288735 containerd[1450]: time="2026-01-24T00:56:37.288208242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:37.288735 containerd[1450]: time="2026-01-24T00:56:37.288273984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:37.288735 containerd[1450]: time="2026-01-24T00:56:37.288310412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:37.288735 containerd[1450]: time="2026-01-24T00:56:37.288583222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:37.315068 systemd[1]: Started cri-containerd-bc5fb1c0cf410bc5e25da5ad5e5873890052231114609b26f660eacdddd85543.scope - libcontainer container bc5fb1c0cf410bc5e25da5ad5e5873890052231114609b26f660eacdddd85543. Jan 24 00:56:37.358782 containerd[1450]: time="2026-01-24T00:56:37.358677259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jwqc5,Uid:ef393edf-33fc-41ee-b554-611a10d41473,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc5fb1c0cf410bc5e25da5ad5e5873890052231114609b26f660eacdddd85543\"" Jan 24 00:56:37.363778 containerd[1450]: time="2026-01-24T00:56:37.363622079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:56:37.612763 kubelet[2488]: E0124 00:56:37.612577 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:37.613306 kubelet[2488]: E0124 00:56:37.613167 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:38.248502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414380166.mount: Deactivated successfully. 
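The entries above trace the CRI container lifecycle for kube-proxy-dnn4t: RunPodSandbox returns a sandbox ID (6cf21e2a...), CreateContainer is issued inside that sandbox and returns a container ID (7eb7c21b...), and StartContainer launches it; the same sequence then repeats for the tigera-operator pod. Below is a minimal client-side sketch of that call order, assuming the CRI v1 gRPC API and containerd's usual socket path; the kube-proxy image reference is a placeholder, since this log never names it.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed endpoint: containerd's default CRI socket.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rc := runtimeapi.NewRuntimeServiceClient(conn)

    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	// 1. RunPodSandbox -> sandbox id (the log shows 6cf21e2a... for this pod).
    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "kube-proxy-dnn4t",
    			Uid:       "835eb99d-ccf4-4dd6-a8b7-a067918b9bf2",
    			Namespace: "kube-system",
    			Attempt:   0,
    		},
    	}
    	sb, err := rc.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. CreateContainer within that sandbox -> container id.
    	cc, err := rc.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		SandboxConfig: sandboxCfg,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.1"}, // placeholder tag
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 3. StartContainer, matching the "StartContainer ... returns successfully" entry.
    	if _, err := rc.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("sandbox:", sb.PodSandboxId, "container:", cc.ContainerId)
    }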
Jan 24 00:56:38.392167 kubelet[2488]: E0124 00:56:38.391366 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:38.405657 kubelet[2488]: I0124 00:56:38.405293 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dnn4t" podStartSLOduration=2.405279549 podStartE2EDuration="2.405279549s" podCreationTimestamp="2026-01-24 00:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:56:37.624939588 +0000 UTC m=+6.135634980" watchObservedRunningTime="2026-01-24 00:56:38.405279549 +0000 UTC m=+6.915974941" Jan 24 00:56:38.613935 kubelet[2488]: E0124 00:56:38.613772 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:38.614388 kubelet[2488]: E0124 00:56:38.614338 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:39.086775 containerd[1450]: time="2026-01-24T00:56:39.086657744Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:39.087620 containerd[1450]: time="2026-01-24T00:56:39.087530659Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:56:39.089031 containerd[1450]: time="2026-01-24T00:56:39.088925134Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:39.091604 containerd[1450]: time="2026-01-24T00:56:39.091564658Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:39.092925 containerd[1450]: time="2026-01-24T00:56:39.092876179Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.729219156s" Jan 24 00:56:39.093026 containerd[1450]: time="2026-01-24T00:56:39.092933044Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:56:39.097949 containerd[1450]: time="2026-01-24T00:56:39.097911098Z" level=info msg="CreateContainer within sandbox \"bc5fb1c0cf410bc5e25da5ad5e5873890052231114609b26f660eacdddd85543\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:56:39.112052 containerd[1450]: time="2026-01-24T00:56:39.111931590Z" level=info msg="CreateContainer within sandbox \"bc5fb1c0cf410bc5e25da5ad5e5873890052231114609b26f660eacdddd85543\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0fd7efec53dbbaae7ee772727e5f417a2e27909680c71ff4da2856f23024a947\"" Jan 24 00:56:39.113293 containerd[1450]: time="2026-01-24T00:56:39.112573445Z" 
level=info msg="StartContainer for \"0fd7efec53dbbaae7ee772727e5f417a2e27909680c71ff4da2856f23024a947\"" Jan 24 00:56:39.145655 systemd[1]: Started cri-containerd-0fd7efec53dbbaae7ee772727e5f417a2e27909680c71ff4da2856f23024a947.scope - libcontainer container 0fd7efec53dbbaae7ee772727e5f417a2e27909680c71ff4da2856f23024a947. Jan 24 00:56:39.180377 containerd[1450]: time="2026-01-24T00:56:39.180339413Z" level=info msg="StartContainer for \"0fd7efec53dbbaae7ee772727e5f417a2e27909680c71ff4da2856f23024a947\" returns successfully" Jan 24 00:56:44.410047 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:44.417794 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:44.425987 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:52004.service: Deactivated successfully. Jan 24 00:56:44.430999 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:56:44.431200 systemd[1]: session-7.scope: Consumed 5.358s CPU time, 159.7M memory peak, 0B memory swap peak. Jan 24 00:56:44.435828 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:56:44.437526 systemd-logind[1429]: Removed session 7. Jan 24 00:56:45.168467 kubelet[2488]: E0124 00:56:45.167064 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:45.181143 kubelet[2488]: I0124 00:56:45.180634 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jwqc5" podStartSLOduration=7.44849778 podStartE2EDuration="9.180618378s" podCreationTimestamp="2026-01-24 00:56:36 +0000 UTC" firstStartedPulling="2026-01-24 00:56:37.361781399 +0000 UTC m=+5.872476790" lastFinishedPulling="2026-01-24 00:56:39.093901986 +0000 UTC m=+7.604597388" observedRunningTime="2026-01-24 00:56:39.626109752 +0000 UTC m=+8.136805184" watchObservedRunningTime="2026-01-24 00:56:45.180618378 +0000 UTC m=+13.691313769" Jan 24 00:56:48.709380 systemd[1]: Created slice kubepods-besteffort-pod017ac3b4_714e_43d9_8a26_d0330be3b653.slice - libcontainer container kubepods-besteffort-pod017ac3b4_714e_43d9_8a26_d0330be3b653.slice. 
Jan 24 00:56:48.789526 kubelet[2488]: I0124 00:56:48.789419 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/017ac3b4-714e-43d9-8a26-d0330be3b653-typha-certs\") pod \"calico-typha-748496bcdd-snqp2\" (UID: \"017ac3b4-714e-43d9-8a26-d0330be3b653\") " pod="calico-system/calico-typha-748496bcdd-snqp2" Jan 24 00:56:48.789526 kubelet[2488]: I0124 00:56:48.789509 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmt64\" (UniqueName: \"kubernetes.io/projected/017ac3b4-714e-43d9-8a26-d0330be3b653-kube-api-access-gmt64\") pod \"calico-typha-748496bcdd-snqp2\" (UID: \"017ac3b4-714e-43d9-8a26-d0330be3b653\") " pod="calico-system/calico-typha-748496bcdd-snqp2" Jan 24 00:56:48.789526 kubelet[2488]: I0124 00:56:48.789534 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/017ac3b4-714e-43d9-8a26-d0330be3b653-tigera-ca-bundle\") pod \"calico-typha-748496bcdd-snqp2\" (UID: \"017ac3b4-714e-43d9-8a26-d0330be3b653\") " pod="calico-system/calico-typha-748496bcdd-snqp2" Jan 24 00:56:48.846543 systemd[1]: Created slice kubepods-besteffort-pod999d63c2_378b_42d0_bfd1_5e845789feba.slice - libcontainer container kubepods-besteffort-pod999d63c2_378b_42d0_bfd1_5e845789feba.slice. Jan 24 00:56:48.890380 kubelet[2488]: I0124 00:56:48.890287 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-cni-net-dir\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890380 kubelet[2488]: I0124 00:56:48.890347 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/999d63c2-378b-42d0-bfd1-5e845789feba-node-certs\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890380 kubelet[2488]: I0124 00:56:48.890365 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-lib-modules\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890606 kubelet[2488]: I0124 00:56:48.890405 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999d63c2-378b-42d0-bfd1-5e845789feba-tigera-ca-bundle\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890606 kubelet[2488]: I0124 00:56:48.890467 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-cni-bin-dir\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890606 kubelet[2488]: I0124 00:56:48.890496 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-var-run-calico\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890606 kubelet[2488]: I0124 00:56:48.890517 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-policysync\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890606 kubelet[2488]: I0124 00:56:48.890533 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-var-lib-calico\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890717 kubelet[2488]: I0124 00:56:48.890546 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-xtables-lock\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890717 kubelet[2488]: I0124 00:56:48.890566 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-cni-log-dir\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890717 kubelet[2488]: I0124 00:56:48.890579 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/999d63c2-378b-42d0-bfd1-5e845789feba-flexvol-driver-host\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.890717 kubelet[2488]: I0124 00:56:48.890595 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswdf\" (UniqueName: \"kubernetes.io/projected/999d63c2-378b-42d0-bfd1-5e845789feba-kube-api-access-nswdf\") pod \"calico-node-sm2k5\" (UID: \"999d63c2-378b-42d0-bfd1-5e845789feba\") " pod="calico-system/calico-node-sm2k5" Jan 24 00:56:48.994995 kubelet[2488]: E0124 00:56:48.994777 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:48.994995 kubelet[2488]: W0124 00:56:48.994816 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:48.994995 kubelet[2488]: E0124 00:56:48.994874 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:48.999490 kubelet[2488]: E0124 00:56:48.998022 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:48.999490 kubelet[2488]: W0124 00:56:48.998040 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:48.999490 kubelet[2488]: E0124 00:56:48.998058 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.003628 kubelet[2488]: E0124 00:56:49.003608 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.003770 kubelet[2488]: W0124 00:56:49.003716 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.003770 kubelet[2488]: E0124 00:56:49.003763 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.015692 kubelet[2488]: E0124 00:56:49.015635 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:49.016392 containerd[1450]: time="2026-01-24T00:56:49.016332590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748496bcdd-snqp2,Uid:017ac3b4-714e-43d9-8a26-d0330be3b653,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:49.039312 kubelet[2488]: E0124 00:56:49.039237 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:56:49.065480 containerd[1450]: time="2026-01-24T00:56:49.064971929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:49.065480 containerd[1450]: time="2026-01-24T00:56:49.065043041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:49.065480 containerd[1450]: time="2026-01-24T00:56:49.065060974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:49.065480 containerd[1450]: time="2026-01-24T00:56:49.065191686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:49.080772 kubelet[2488]: E0124 00:56:49.080745 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.080772 kubelet[2488]: W0124 00:56:49.080768 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.080924 kubelet[2488]: E0124 00:56:49.080791 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.081191 kubelet[2488]: E0124 00:56:49.081155 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.081191 kubelet[2488]: W0124 00:56:49.081170 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.081191 kubelet[2488]: E0124 00:56:49.081186 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.081587 kubelet[2488]: E0124 00:56:49.081556 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.081587 kubelet[2488]: W0124 00:56:49.081568 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.081587 kubelet[2488]: E0124 00:56:49.081581 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.081934 kubelet[2488]: E0124 00:56:49.081909 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.081998 kubelet[2488]: W0124 00:56:49.081938 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.081998 kubelet[2488]: E0124 00:56:49.081950 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.082912 kubelet[2488]: E0124 00:56:49.082749 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.083117 kubelet[2488]: W0124 00:56:49.083046 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.083117 kubelet[2488]: E0124 00:56:49.083076 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.083387 kubelet[2488]: E0124 00:56:49.083343 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.083387 kubelet[2488]: W0124 00:56:49.083357 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.083387 kubelet[2488]: E0124 00:56:49.083370 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.083986 kubelet[2488]: E0124 00:56:49.083913 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.083986 kubelet[2488]: W0124 00:56:49.083927 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.083986 kubelet[2488]: E0124 00:56:49.083985 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.084344 kubelet[2488]: E0124 00:56:49.084250 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.084344 kubelet[2488]: W0124 00:56:49.084258 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.084344 kubelet[2488]: E0124 00:56:49.084266 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.084582 kubelet[2488]: E0124 00:56:49.084558 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.084582 kubelet[2488]: W0124 00:56:49.084580 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.084696 kubelet[2488]: E0124 00:56:49.084589 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.085588 kubelet[2488]: E0124 00:56:49.085517 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.085588 kubelet[2488]: W0124 00:56:49.085542 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.085588 kubelet[2488]: E0124 00:56:49.085552 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.085936 kubelet[2488]: E0124 00:56:49.085823 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.085936 kubelet[2488]: W0124 00:56:49.085867 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.085936 kubelet[2488]: E0124 00:56:49.085876 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.086147 kubelet[2488]: E0124 00:56:49.086124 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.086147 kubelet[2488]: W0124 00:56:49.086132 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.086147 kubelet[2488]: E0124 00:56:49.086142 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.086398 kubelet[2488]: E0124 00:56:49.086367 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.086398 kubelet[2488]: W0124 00:56:49.086384 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.086398 kubelet[2488]: E0124 00:56:49.086392 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.086706 kubelet[2488]: E0124 00:56:49.086675 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.086706 kubelet[2488]: W0124 00:56:49.086692 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.086706 kubelet[2488]: E0124 00:56:49.086700 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.086962 kubelet[2488]: E0124 00:56:49.086934 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.086962 kubelet[2488]: W0124 00:56:49.086952 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.086962 kubelet[2488]: E0124 00:56:49.086960 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.087198 kubelet[2488]: E0124 00:56:49.087170 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.087198 kubelet[2488]: W0124 00:56:49.087187 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.087198 kubelet[2488]: E0124 00:56:49.087195 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.087441 kubelet[2488]: E0124 00:56:49.087414 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.087485 kubelet[2488]: W0124 00:56:49.087465 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.087485 kubelet[2488]: E0124 00:56:49.087483 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.087718 kubelet[2488]: E0124 00:56:49.087698 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.087718 kubelet[2488]: W0124 00:56:49.087713 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.087764 kubelet[2488]: E0124 00:56:49.087721 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.087974 kubelet[2488]: E0124 00:56:49.087955 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.087974 kubelet[2488]: W0124 00:56:49.087971 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.088025 kubelet[2488]: E0124 00:56:49.087978 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.088253 kubelet[2488]: E0124 00:56:49.088200 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.088253 kubelet[2488]: W0124 00:56:49.088219 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.088253 kubelet[2488]: E0124 00:56:49.088226 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.090606 systemd[1]: Started cri-containerd-dfb07da677fd5dfdb62f28cebfbbf3c72c8d31c3fee6fc0e8875d257c9f3e02c.scope - libcontainer container dfb07da677fd5dfdb62f28cebfbbf3c72c8d31c3fee6fc0e8875d257c9f3e02c. Jan 24 00:56:49.093124 kubelet[2488]: E0124 00:56:49.093111 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.093124 kubelet[2488]: W0124 00:56:49.093123 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.093176 kubelet[2488]: E0124 00:56:49.093133 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.093176 kubelet[2488]: I0124 00:56:49.093155 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34330cde-9cb8-45f6-8598-34068565d43c-kubelet-dir\") pod \"csi-node-driver-t469r\" (UID: \"34330cde-9cb8-45f6-8598-34068565d43c\") " pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:49.093899 kubelet[2488]: E0124 00:56:49.093881 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.093899 kubelet[2488]: W0124 00:56:49.093896 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.094018 kubelet[2488]: E0124 00:56:49.093906 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.094018 kubelet[2488]: I0124 00:56:49.093922 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34330cde-9cb8-45f6-8598-34068565d43c-registration-dir\") pod \"csi-node-driver-t469r\" (UID: \"34330cde-9cb8-45f6-8598-34068565d43c\") " pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:49.094187 kubelet[2488]: E0124 00:56:49.094159 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.094187 kubelet[2488]: W0124 00:56:49.094171 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.094187 kubelet[2488]: E0124 00:56:49.094179 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.095065 kubelet[2488]: I0124 00:56:49.095044 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/34330cde-9cb8-45f6-8598-34068565d43c-socket-dir\") pod \"csi-node-driver-t469r\" (UID: \"34330cde-9cb8-45f6-8598-34068565d43c\") " pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:49.096572 kubelet[2488]: E0124 00:56:49.096546 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.096572 kubelet[2488]: W0124 00:56:49.096560 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.096572 kubelet[2488]: E0124 00:56:49.096569 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.096766 kubelet[2488]: I0124 00:56:49.096733 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/34330cde-9cb8-45f6-8598-34068565d43c-varrun\") pod \"csi-node-driver-t469r\" (UID: \"34330cde-9cb8-45f6-8598-34068565d43c\") " pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:49.096905 kubelet[2488]: E0124 00:56:49.096893 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.096929 kubelet[2488]: W0124 00:56:49.096905 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.096929 kubelet[2488]: E0124 00:56:49.096913 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.097182 kubelet[2488]: E0124 00:56:49.097153 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.097182 kubelet[2488]: W0124 00:56:49.097163 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.097182 kubelet[2488]: E0124 00:56:49.097171 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.097494 kubelet[2488]: E0124 00:56:49.097419 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.097494 kubelet[2488]: W0124 00:56:49.097474 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.097494 kubelet[2488]: E0124 00:56:49.097482 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.097732 kubelet[2488]: E0124 00:56:49.097719 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.097732 kubelet[2488]: W0124 00:56:49.097729 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.097785 kubelet[2488]: E0124 00:56:49.097737 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.097785 kubelet[2488]: I0124 00:56:49.097754 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chmgc\" (UniqueName: \"kubernetes.io/projected/34330cde-9cb8-45f6-8598-34068565d43c-kube-api-access-chmgc\") pod \"csi-node-driver-t469r\" (UID: \"34330cde-9cb8-45f6-8598-34068565d43c\") " pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:49.098041 kubelet[2488]: E0124 00:56:49.098001 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.098041 kubelet[2488]: W0124 00:56:49.098012 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.098041 kubelet[2488]: E0124 00:56:49.098021 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.098316 kubelet[2488]: E0124 00:56:49.098273 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.098316 kubelet[2488]: W0124 00:56:49.098283 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.098316 kubelet[2488]: E0124 00:56:49.098291 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.098613 kubelet[2488]: E0124 00:56:49.098578 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.098613 kubelet[2488]: W0124 00:56:49.098602 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.098613 kubelet[2488]: E0124 00:56:49.098610 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.098978 kubelet[2488]: E0124 00:56:49.098955 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.098978 kubelet[2488]: W0124 00:56:49.098975 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.098978 kubelet[2488]: E0124 00:56:49.098983 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.100206 kubelet[2488]: E0124 00:56:49.100184 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.100206 kubelet[2488]: W0124 00:56:49.100197 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.100206 kubelet[2488]: E0124 00:56:49.100207 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.100539 kubelet[2488]: E0124 00:56:49.100509 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.100539 kubelet[2488]: W0124 00:56:49.100533 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.100539 kubelet[2488]: E0124 00:56:49.100542 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.100912 kubelet[2488]: E0124 00:56:49.100882 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.100912 kubelet[2488]: W0124 00:56:49.100893 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.100912 kubelet[2488]: E0124 00:56:49.100902 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.150588 kubelet[2488]: E0124 00:56:49.150518 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:49.151899 containerd[1450]: time="2026-01-24T00:56:49.151829305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sm2k5,Uid:999d63c2-378b-42d0-bfd1-5e845789feba,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:49.161948 containerd[1450]: time="2026-01-24T00:56:49.161666454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748496bcdd-snqp2,Uid:017ac3b4-714e-43d9-8a26-d0330be3b653,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfb07da677fd5dfdb62f28cebfbbf3c72c8d31c3fee6fc0e8875d257c9f3e02c\"" Jan 24 00:56:49.165746 kubelet[2488]: E0124 00:56:49.165650 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:49.169153 containerd[1450]: time="2026-01-24T00:56:49.169107235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:56:49.185246 containerd[1450]: time="2026-01-24T00:56:49.185108261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:49.186913 containerd[1450]: time="2026-01-24T00:56:49.186723173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:49.186913 containerd[1450]: time="2026-01-24T00:56:49.186762765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:49.186913 containerd[1450]: time="2026-01-24T00:56:49.186878801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:49.203072 kubelet[2488]: E0124 00:56:49.202970 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.203072 kubelet[2488]: W0124 00:56:49.202994 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.203072 kubelet[2488]: E0124 00:56:49.203013 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.206272 kubelet[2488]: E0124 00:56:49.206204 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.206378 kubelet[2488]: W0124 00:56:49.206297 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.206378 kubelet[2488]: E0124 00:56:49.206319 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.207615 kubelet[2488]: E0124 00:56:49.207541 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.207615 kubelet[2488]: W0124 00:56:49.207580 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.207615 kubelet[2488]: E0124 00:56:49.207591 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.207950 kubelet[2488]: E0124 00:56:49.207917 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.207950 kubelet[2488]: W0124 00:56:49.207944 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.208046 kubelet[2488]: E0124 00:56:49.207954 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.209202 kubelet[2488]: E0124 00:56:49.209174 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.209202 kubelet[2488]: W0124 00:56:49.209189 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.209202 kubelet[2488]: E0124 00:56:49.209199 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.209560 kubelet[2488]: E0124 00:56:49.209533 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.209619 kubelet[2488]: W0124 00:56:49.209561 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.209619 kubelet[2488]: E0124 00:56:49.209576 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.209925 kubelet[2488]: E0124 00:56:49.209901 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.209925 kubelet[2488]: W0124 00:56:49.209923 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.210014 kubelet[2488]: E0124 00:56:49.209932 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.210552 kubelet[2488]: E0124 00:56:49.210271 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.210552 kubelet[2488]: W0124 00:56:49.210282 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.210552 kubelet[2488]: E0124 00:56:49.210290 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.211643 kubelet[2488]: E0124 00:56:49.210895 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.211643 kubelet[2488]: W0124 00:56:49.210907 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.211643 kubelet[2488]: E0124 00:56:49.210916 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.211643 kubelet[2488]: E0124 00:56:49.211309 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.211643 kubelet[2488]: W0124 00:56:49.211362 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.211643 kubelet[2488]: E0124 00:56:49.211371 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.212475 kubelet[2488]: E0124 00:56:49.212415 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.212573 kubelet[2488]: W0124 00:56:49.212507 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.212573 kubelet[2488]: E0124 00:56:49.212523 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.213612 kubelet[2488]: E0124 00:56:49.213536 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.213887 kubelet[2488]: W0124 00:56:49.213786 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.213887 kubelet[2488]: E0124 00:56:49.213800 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.214300 kubelet[2488]: E0124 00:56:49.214283 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.214894 kubelet[2488]: W0124 00:56:49.214375 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.214894 kubelet[2488]: E0124 00:56:49.214390 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.215471 kubelet[2488]: E0124 00:56:49.215458 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.215578 kubelet[2488]: W0124 00:56:49.215564 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.215683 kubelet[2488]: E0124 00:56:49.215672 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.216139 kubelet[2488]: E0124 00:56:49.216127 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.216216 kubelet[2488]: W0124 00:56:49.216205 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.216310 kubelet[2488]: E0124 00:56:49.216249 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.216686 kubelet[2488]: E0124 00:56:49.216675 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.216852 kubelet[2488]: W0124 00:56:49.216761 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.216852 kubelet[2488]: E0124 00:56:49.216775 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.217387 kubelet[2488]: E0124 00:56:49.217269 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.217387 kubelet[2488]: W0124 00:56:49.217279 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.217387 kubelet[2488]: E0124 00:56:49.217287 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.217675 systemd[1]: Started cri-containerd-ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274.scope - libcontainer container ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274. Jan 24 00:56:49.217800 kubelet[2488]: E0124 00:56:49.217731 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.217800 kubelet[2488]: W0124 00:56:49.217741 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.217800 kubelet[2488]: E0124 00:56:49.217750 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.218059 kubelet[2488]: E0124 00:56:49.218038 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.218059 kubelet[2488]: W0124 00:56:49.218058 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.218138 kubelet[2488]: E0124 00:56:49.218067 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.218409 kubelet[2488]: E0124 00:56:49.218335 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.218620 kubelet[2488]: W0124 00:56:49.218526 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.218620 kubelet[2488]: E0124 00:56:49.218541 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.219210 kubelet[2488]: E0124 00:56:49.219197 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.219508 kubelet[2488]: W0124 00:56:49.219291 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.219508 kubelet[2488]: E0124 00:56:49.219310 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.219814 kubelet[2488]: E0124 00:56:49.219756 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.219957 kubelet[2488]: W0124 00:56:49.219944 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.220076 kubelet[2488]: E0124 00:56:49.220064 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.220750 kubelet[2488]: E0124 00:56:49.220734 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.220907 kubelet[2488]: W0124 00:56:49.220894 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.221016 kubelet[2488]: E0124 00:56:49.220984 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.221550 kubelet[2488]: E0124 00:56:49.221538 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.221878 kubelet[2488]: W0124 00:56:49.221707 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.221878 kubelet[2488]: E0124 00:56:49.221730 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.222347 kubelet[2488]: E0124 00:56:49.222336 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.222401 kubelet[2488]: W0124 00:56:49.222391 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.222512 kubelet[2488]: E0124 00:56:49.222499 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:49.233075 kubelet[2488]: E0124 00:56:49.233011 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:49.233237 kubelet[2488]: W0124 00:56:49.233218 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:49.233526 kubelet[2488]: E0124 00:56:49.233486 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:49.245346 containerd[1450]: time="2026-01-24T00:56:49.245080963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sm2k5,Uid:999d63c2-378b-42d0-bfd1-5e845789feba,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\"" Jan 24 00:56:49.245911 kubelet[2488]: E0124 00:56:49.245798 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:50.127765 containerd[1450]: time="2026-01-24T00:56:50.127655588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:50.128982 containerd[1450]: time="2026-01-24T00:56:50.128880968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 24 00:56:50.130225 containerd[1450]: time="2026-01-24T00:56:50.130161662Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:50.133375 containerd[1450]: time="2026-01-24T00:56:50.133270732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:50.134144 containerd[1450]: time="2026-01-24T00:56:50.134109019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 964.9725ms" Jan 24 00:56:50.134200 containerd[1450]: time="2026-01-24T00:56:50.134154483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:56:50.138939 containerd[1450]: time="2026-01-24T00:56:50.138892903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:56:50.167043 containerd[1450]: time="2026-01-24T00:56:50.166978825Z" level=info msg="CreateContainer within sandbox \"dfb07da677fd5dfdb62f28cebfbbf3c72c8d31c3fee6fc0e8875d257c9f3e02c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:56:50.188958 containerd[1450]: time="2026-01-24T00:56:50.188821264Z" level=info msg="CreateContainer within sandbox \"dfb07da677fd5dfdb62f28cebfbbf3c72c8d31c3fee6fc0e8875d257c9f3e02c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"16092aa19c70951178cb1c42714136eb6345ba80b554e90602b4943020632608\"" Jan 24 00:56:50.192995 containerd[1450]: time="2026-01-24T00:56:50.192388659Z" level=info msg="StartContainer for \"16092aa19c70951178cb1c42714136eb6345ba80b554e90602b4943020632608\"" Jan 24 00:56:50.222646 systemd[1]: Started cri-containerd-16092aa19c70951178cb1c42714136eb6345ba80b554e90602b4943020632608.scope - libcontainer container 16092aa19c70951178cb1c42714136eb6345ba80b554e90602b4943020632608. 
Jan 24 00:56:50.289715 containerd[1450]: time="2026-01-24T00:56:50.289588765Z" level=info msg="StartContainer for \"16092aa19c70951178cb1c42714136eb6345ba80b554e90602b4943020632608\" returns successfully" Jan 24 00:56:50.585200 kubelet[2488]: E0124 00:56:50.585112 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:56:50.647306 kubelet[2488]: E0124 00:56:50.647269 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:50.697097 kubelet[2488]: E0124 00:56:50.697020 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.697097 kubelet[2488]: W0124 00:56:50.697058 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.697097 kubelet[2488]: E0124 00:56:50.697079 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.697405 kubelet[2488]: E0124 00:56:50.697369 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.697405 kubelet[2488]: W0124 00:56:50.697391 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.697405 kubelet[2488]: E0124 00:56:50.697403 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.697850 kubelet[2488]: E0124 00:56:50.697801 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.697850 kubelet[2488]: W0124 00:56:50.697821 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.697915 kubelet[2488]: E0124 00:56:50.697856 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.698356 kubelet[2488]: E0124 00:56:50.698325 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.698356 kubelet[2488]: W0124 00:56:50.698346 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.698356 kubelet[2488]: E0124 00:56:50.698356 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:50.698812 kubelet[2488]: E0124 00:56:50.698779 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.698812 kubelet[2488]: W0124 00:56:50.698799 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.698812 kubelet[2488]: E0124 00:56:50.698808 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.699183 kubelet[2488]: E0124 00:56:50.699091 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.699183 kubelet[2488]: W0124 00:56:50.699113 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.699183 kubelet[2488]: E0124 00:56:50.699122 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.699620 kubelet[2488]: E0124 00:56:50.699595 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.699620 kubelet[2488]: W0124 00:56:50.699614 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.699730 kubelet[2488]: E0124 00:56:50.699624 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.700418 kubelet[2488]: E0124 00:56:50.700387 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.700418 kubelet[2488]: W0124 00:56:50.700409 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.700524 kubelet[2488]: E0124 00:56:50.700420 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.700796 kubelet[2488]: E0124 00:56:50.700768 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.700796 kubelet[2488]: W0124 00:56:50.700789 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.700796 kubelet[2488]: E0124 00:56:50.700797 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:50.701196 kubelet[2488]: E0124 00:56:50.701168 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.701196 kubelet[2488]: W0124 00:56:50.701188 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.701266 kubelet[2488]: E0124 00:56:50.701198 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.701581 kubelet[2488]: E0124 00:56:50.701550 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.701581 kubelet[2488]: W0124 00:56:50.701576 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.701656 kubelet[2488]: E0124 00:56:50.701588 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.702065 kubelet[2488]: E0124 00:56:50.702030 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.702065 kubelet[2488]: W0124 00:56:50.702062 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.702152 kubelet[2488]: E0124 00:56:50.702085 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.702493 kubelet[2488]: E0124 00:56:50.702414 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.702493 kubelet[2488]: W0124 00:56:50.702476 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.702493 kubelet[2488]: E0124 00:56:50.702486 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.702765 kubelet[2488]: E0124 00:56:50.702742 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.702765 kubelet[2488]: W0124 00:56:50.702762 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.702881 kubelet[2488]: E0124 00:56:50.702771 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:50.703218 kubelet[2488]: E0124 00:56:50.703129 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.703218 kubelet[2488]: W0124 00:56:50.703148 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.703218 kubelet[2488]: E0124 00:56:50.703156 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.721629 kubelet[2488]: E0124 00:56:50.721575 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.721629 kubelet[2488]: W0124 00:56:50.721601 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.721867 kubelet[2488]: E0124 00:56:50.721612 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.722155 kubelet[2488]: E0124 00:56:50.722097 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.722155 kubelet[2488]: W0124 00:56:50.722143 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.722262 kubelet[2488]: E0124 00:56:50.722175 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.722712 kubelet[2488]: E0124 00:56:50.722658 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.722712 kubelet[2488]: W0124 00:56:50.722689 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.722805 kubelet[2488]: E0124 00:56:50.722713 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.723246 kubelet[2488]: E0124 00:56:50.723194 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.723246 kubelet[2488]: W0124 00:56:50.723226 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.723357 kubelet[2488]: E0124 00:56:50.723248 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:50.723732 kubelet[2488]: E0124 00:56:50.723682 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.723732 kubelet[2488]: W0124 00:56:50.723717 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.723732 kubelet[2488]: E0124 00:56:50.723733 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.724194 kubelet[2488]: E0124 00:56:50.724167 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.724194 kubelet[2488]: W0124 00:56:50.724193 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.724260 kubelet[2488]: E0124 00:56:50.724207 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.724610 kubelet[2488]: E0124 00:56:50.724583 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.724610 kubelet[2488]: W0124 00:56:50.724607 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.724672 kubelet[2488]: E0124 00:56:50.724621 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.725062 kubelet[2488]: E0124 00:56:50.725016 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.725062 kubelet[2488]: W0124 00:56:50.725045 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.725062 kubelet[2488]: E0124 00:56:50.725057 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.725570 kubelet[2488]: E0124 00:56:50.725544 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.725570 kubelet[2488]: W0124 00:56:50.725567 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.725628 kubelet[2488]: E0124 00:56:50.725580 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:50.726083 kubelet[2488]: E0124 00:56:50.726062 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.726083 kubelet[2488]: W0124 00:56:50.726082 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.726137 kubelet[2488]: E0124 00:56:50.726092 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.726491 kubelet[2488]: E0124 00:56:50.726397 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.726491 kubelet[2488]: W0124 00:56:50.726419 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.726491 kubelet[2488]: E0124 00:56:50.726459 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.726805 kubelet[2488]: E0124 00:56:50.726765 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.726805 kubelet[2488]: W0124 00:56:50.726787 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.726805 kubelet[2488]: E0124 00:56:50.726795 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.727176 kubelet[2488]: E0124 00:56:50.727140 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.727176 kubelet[2488]: W0124 00:56:50.727161 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.727176 kubelet[2488]: E0124 00:56:50.727169 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.727574 kubelet[2488]: E0124 00:56:50.727535 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.727574 kubelet[2488]: W0124 00:56:50.727557 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.727574 kubelet[2488]: E0124 00:56:50.727565 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:50.728123 kubelet[2488]: E0124 00:56:50.728075 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.728123 kubelet[2488]: W0124 00:56:50.728108 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.728123 kubelet[2488]: E0124 00:56:50.728122 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.728617 kubelet[2488]: E0124 00:56:50.728584 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.728617 kubelet[2488]: W0124 00:56:50.728610 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.728691 kubelet[2488]: E0124 00:56:50.728624 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.729128 kubelet[2488]: E0124 00:56:50.729086 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.729128 kubelet[2488]: W0124 00:56:50.729107 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.729128 kubelet[2488]: E0124 00:56:50.729115 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:50.729501 kubelet[2488]: E0124 00:56:50.729460 2488 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:50.729501 kubelet[2488]: W0124 00:56:50.729485 2488 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:50.729501 kubelet[2488]: E0124 00:56:50.729494 2488 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:51.016844 containerd[1450]: time="2026-01-24T00:56:51.016737373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:51.017650 containerd[1450]: time="2026-01-24T00:56:51.017567038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:56:51.018869 containerd[1450]: time="2026-01-24T00:56:51.018778942Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:51.021020 containerd[1450]: time="2026-01-24T00:56:51.020944992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:51.021746 containerd[1450]: time="2026-01-24T00:56:51.021667908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 882.719402ms" Jan 24 00:56:51.021746 containerd[1450]: time="2026-01-24T00:56:51.021710697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:56:51.026954 containerd[1450]: time="2026-01-24T00:56:51.026911082Z" level=info msg="CreateContainer within sandbox \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:56:51.043166 containerd[1450]: time="2026-01-24T00:56:51.043104342Z" level=info msg="CreateContainer within sandbox \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12\"" Jan 24 00:56:51.043685 containerd[1450]: time="2026-01-24T00:56:51.043629120Z" level=info msg="StartContainer for \"9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12\"" Jan 24 00:56:51.090719 systemd[1]: Started cri-containerd-9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12.scope - libcontainer container 9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12. Jan 24 00:56:51.123931 containerd[1450]: time="2026-01-24T00:56:51.123887421Z" level=info msg="StartContainer for \"9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12\" returns successfully" Jan 24 00:56:51.139251 systemd[1]: cri-containerd-9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12.scope: Deactivated successfully. 
Jan 24 00:56:51.184755 containerd[1450]: time="2026-01-24T00:56:51.182213200Z" level=info msg="shim disconnected" id=9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12 namespace=k8s.io Jan 24 00:56:51.184755 containerd[1450]: time="2026-01-24T00:56:51.184746316Z" level=warning msg="cleaning up after shim disconnected" id=9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12 namespace=k8s.io Jan 24 00:56:51.185256 containerd[1450]: time="2026-01-24T00:56:51.184767465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:56:51.282527 update_engine[1438]: I20260124 00:56:51.282307 1438 update_attempter.cc:509] Updating boot flags... Jan 24 00:56:51.314506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3232) Jan 24 00:56:51.357848 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3230) Jan 24 00:56:51.398920 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3230) Jan 24 00:56:51.650400 kubelet[2488]: E0124 00:56:51.650342 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:51.651058 kubelet[2488]: I0124 00:56:51.650758 2488 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:56:51.651161 kubelet[2488]: E0124 00:56:51.651137 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:51.651261 containerd[1450]: time="2026-01-24T00:56:51.651171426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:56:51.668301 kubelet[2488]: I0124 00:56:51.668099 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-748496bcdd-snqp2" podStartSLOduration=2.69562305 podStartE2EDuration="3.667994992s" podCreationTimestamp="2026-01-24 00:56:48 +0000 UTC" firstStartedPulling="2026-01-24 00:56:49.166341194 +0000 UTC m=+17.677036586" lastFinishedPulling="2026-01-24 00:56:50.138713137 +0000 UTC m=+18.649408528" observedRunningTime="2026-01-24 00:56:50.665548223 +0000 UTC m=+19.176243614" watchObservedRunningTime="2026-01-24 00:56:51.667994992 +0000 UTC m=+20.178690394" Jan 24 00:56:51.897947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9532670da942d21429a44b4a6dbacb5461791fbf41e5a06945079cda9a316b12-rootfs.mount: Deactivated successfully. 
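The pod_startup_latency_tracker entry above for calico-typha-748496bcdd-snqp2 is internally consistent: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is that figure minus the window spent pulling images (lastFinishedPulling minus firstStartedPulling). A quick check using only the numbers from the log entry:

```python
# All values are seconds past 00:56, copied straight from the log entry above.
created       = 48.0           # podCreationTimestamp  00:56:48
running       = 51.667994992   # observedRunningTime   00:56:51.667994992
first_pulling = 49.166341194   # firstStartedPulling
last_pulled   = 50.138713137   # lastFinishedPulling

e2e  = running - created            # 3.667994992 -> podStartE2EDuration in the log
pull = last_pulled - first_pulling  # 0.972371943 -> time spent pulling images
slo  = e2e - pull                   # 2.695623049 -> podStartSLOduration in the log
print(f"{e2e=:.9f} {pull=:.9f} {slo=:.9f}")
```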
Jan 24 00:56:52.585208 kubelet[2488]: E0124 00:56:52.585126 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:56:53.242322 containerd[1450]: time="2026-01-24T00:56:53.242235024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:53.243383 containerd[1450]: time="2026-01-24T00:56:53.243292636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:56:53.244343 containerd[1450]: time="2026-01-24T00:56:53.244284084Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:53.247244 containerd[1450]: time="2026-01-24T00:56:53.247177866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:53.248268 containerd[1450]: time="2026-01-24T00:56:53.248215719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.597013487s" Jan 24 00:56:53.248320 containerd[1450]: time="2026-01-24T00:56:53.248267105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:56:53.254211 containerd[1450]: time="2026-01-24T00:56:53.254137376Z" level=info msg="CreateContainer within sandbox \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:56:53.273609 containerd[1450]: time="2026-01-24T00:56:53.273521010Z" level=info msg="CreateContainer within sandbox \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193\"" Jan 24 00:56:53.274188 containerd[1450]: time="2026-01-24T00:56:53.274131003Z" level=info msg="StartContainer for \"1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193\"" Jan 24 00:56:53.338759 systemd[1]: Started cri-containerd-1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193.scope - libcontainer container 1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193. 
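The calico/cni pull reported above can be sanity-checked against the surrounding timestamps: the Pulled message at 00:56:53.248 follows the PullImage request logged at 00:56:51.651 by roughly the same 1.597 s that containerd reports, and treating the "bytes read" figure as the amount fetched from the registry gives an effective transfer rate of about 44 MB/s:

```python
# Figures copied from the calico/cni pull messages above.
bytes_read   = 70_446_859     # "bytes read" reported by containerd
reported_dur = 1.597013487    # "in 1.597013487s" from the Pulled message

# Cross-check against the surrounding timestamps (seconds past 00:56):
requested = 51.651171426      # PullImage "ghcr.io/flatcar/calico/cni:v3.30.4"
pulled    = 53.248215719      # Pulled ... returns image reference
print(f"wall-clock gap : {pulled - requested:.9f}s")                   # ~1.597044s
print(f"throughput     : {bytes_read / reported_dur / 1e6:.1f} MB/s")  # ~44.1 MB/s
```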
Jan 24 00:56:53.370839 containerd[1450]: time="2026-01-24T00:56:53.370735114Z" level=info msg="StartContainer for \"1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193\" returns successfully" Jan 24 00:56:53.660004 kubelet[2488]: E0124 00:56:53.659896 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:54.040641 systemd[1]: cri-containerd-1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193.scope: Deactivated successfully. Jan 24 00:56:54.071085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193-rootfs.mount: Deactivated successfully. Jan 24 00:56:54.136000 containerd[1450]: time="2026-01-24T00:56:54.134944944Z" level=info msg="shim disconnected" id=1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193 namespace=k8s.io Jan 24 00:56:54.136000 containerd[1450]: time="2026-01-24T00:56:54.135006076Z" level=warning msg="cleaning up after shim disconnected" id=1f7b74c38a56b635c6e79d56153f0b564a1ecc03f344180ba252a8a79cd79193 namespace=k8s.io Jan 24 00:56:54.136000 containerd[1450]: time="2026-01-24T00:56:54.135021385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:56:54.136341 kubelet[2488]: I0124 00:56:54.134988 2488 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:56:54.198928 systemd[1]: Created slice kubepods-besteffort-pode706493e_7f12_4ad3_8c2a_5a508961b9f4.slice - libcontainer container kubepods-besteffort-pode706493e_7f12_4ad3_8c2a_5a508961b9f4.slice. Jan 24 00:56:54.211011 systemd[1]: Created slice kubepods-besteffort-pod0364261e_0b7f_4a7d_aec9_83adc08c04f8.slice - libcontainer container kubepods-besteffort-pod0364261e_0b7f_4a7d_aec9_83adc08c04f8.slice. Jan 24 00:56:54.220662 systemd[1]: Created slice kubepods-besteffort-pod203aa399_08cf_4bd0_a44a_0a01debc5662.slice - libcontainer container kubepods-besteffort-pod203aa399_08cf_4bd0_a44a_0a01debc5662.slice. Jan 24 00:56:54.228525 systemd[1]: Created slice kubepods-besteffort-pod7fe9f3ea_2686_424b_8279_86ca8e141669.slice - libcontainer container kubepods-besteffort-pod7fe9f3ea_2686_424b_8279_86ca8e141669.slice. Jan 24 00:56:54.237079 systemd[1]: Created slice kubepods-besteffort-pod5b041d1d_1f61_468c_922a_8ff10d433023.slice - libcontainer container kubepods-besteffort-pod5b041d1d_1f61_468c_922a_8ff10d433023.slice. Jan 24 00:56:54.244114 systemd[1]: Created slice kubepods-besteffort-poddaf15e6c_e319_4b6a_b81a_cb796e8f2eb5.slice - libcontainer container kubepods-besteffort-poddaf15e6c_e319_4b6a_b81a_cb796e8f2eb5.slice. 
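The kubepods-*.slice units created above follow a naming pattern that can be read straight off these entries: the pod's QoS class plus its UID with dashes replaced by underscores. A small reconstruction, using UIDs taken from the log lines themselves:

```python
# Rebuild the slice names systemd reports above from the pod UID and QoS class
# (pattern inferred from these log lines, not from kubelet source).
def pod_slice(qos: str, uid: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

assert pod_slice("besteffort", "e706493e-7f12-4ad3-8c2a-5a508961b9f4") == \
    "kubepods-besteffort-pode706493e_7f12_4ad3_8c2a_5a508961b9f4.slice"
assert pod_slice("besteffort", "5b041d1d-1f61-468c-922a-8ff10d433023") == \
    "kubepods-besteffort-pod5b041d1d_1f61_468c_922a_8ff10d433023.slice"
```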
Jan 24 00:56:54.248554 kubelet[2488]: I0124 00:56:54.248492 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e706493e-7f12-4ad3-8c2a-5a508961b9f4-calico-apiserver-certs\") pod \"calico-apiserver-5df9b4f89d-2dhf7\" (UID: \"e706493e-7f12-4ad3-8c2a-5a508961b9f4\") " pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" Jan 24 00:56:54.248960 kubelet[2488]: I0124 00:56:54.248559 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2k96\" (UniqueName: \"kubernetes.io/projected/e706493e-7f12-4ad3-8c2a-5a508961b9f4-kube-api-access-b2k96\") pod \"calico-apiserver-5df9b4f89d-2dhf7\" (UID: \"e706493e-7f12-4ad3-8c2a-5a508961b9f4\") " pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" Jan 24 00:56:54.248960 kubelet[2488]: I0124 00:56:54.248587 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-backend-key-pair\") pod \"whisker-65fd74bb7-xjhrv\" (UID: \"5b041d1d-1f61-468c-922a-8ff10d433023\") " pod="calico-system/whisker-65fd74bb7-xjhrv" Jan 24 00:56:54.248960 kubelet[2488]: I0124 00:56:54.248630 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-ca-bundle\") pod \"whisker-65fd74bb7-xjhrv\" (UID: \"5b041d1d-1f61-468c-922a-8ff10d433023\") " pod="calico-system/whisker-65fd74bb7-xjhrv" Jan 24 00:56:54.248960 kubelet[2488]: I0124 00:56:54.248672 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a-config-volume\") pod \"coredns-674b8bbfcf-2tdmj\" (UID: \"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a\") " pod="kube-system/coredns-674b8bbfcf-2tdmj" Jan 24 00:56:54.248960 kubelet[2488]: I0124 00:56:54.248704 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zw9\" (UniqueName: \"kubernetes.io/projected/cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a-kube-api-access-78zw9\") pod \"coredns-674b8bbfcf-2tdmj\" (UID: \"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a\") " pod="kube-system/coredns-674b8bbfcf-2tdmj" Jan 24 00:56:54.249748 kubelet[2488]: I0124 00:56:54.248744 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7fe9f3ea-2686-424b-8279-86ca8e141669-calico-apiserver-certs\") pod \"calico-apiserver-5df9b4f89d-kh5pp\" (UID: \"7fe9f3ea-2686-424b-8279-86ca8e141669\") " pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" Jan 24 00:56:54.249748 kubelet[2488]: I0124 00:56:54.248981 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203aa399-08cf-4bd0-a44a-0a01debc5662-goldmane-ca-bundle\") pod \"goldmane-666569f655-fw5bw\" (UID: \"203aa399-08cf-4bd0-a44a-0a01debc5662\") " pod="calico-system/goldmane-666569f655-fw5bw" Jan 24 00:56:54.249748 kubelet[2488]: I0124 00:56:54.249033 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cprhh\" (UniqueName: 
\"kubernetes.io/projected/0364261e-0b7f-4a7d-aec9-83adc08c04f8-kube-api-access-cprhh\") pod \"calico-kube-controllers-b646d8bfb-nxb7c\" (UID: \"0364261e-0b7f-4a7d-aec9-83adc08c04f8\") " pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" Jan 24 00:56:54.249748 kubelet[2488]: I0124 00:56:54.249063 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/203aa399-08cf-4bd0-a44a-0a01debc5662-config\") pod \"goldmane-666569f655-fw5bw\" (UID: \"203aa399-08cf-4bd0-a44a-0a01debc5662\") " pod="calico-system/goldmane-666569f655-fw5bw" Jan 24 00:56:54.249748 kubelet[2488]: I0124 00:56:54.249524 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/203aa399-08cf-4bd0-a44a-0a01debc5662-goldmane-key-pair\") pod \"goldmane-666569f655-fw5bw\" (UID: \"203aa399-08cf-4bd0-a44a-0a01debc5662\") " pod="calico-system/goldmane-666569f655-fw5bw" Jan 24 00:56:54.250271 kubelet[2488]: I0124 00:56:54.249873 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0364261e-0b7f-4a7d-aec9-83adc08c04f8-tigera-ca-bundle\") pod \"calico-kube-controllers-b646d8bfb-nxb7c\" (UID: \"0364261e-0b7f-4a7d-aec9-83adc08c04f8\") " pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" Jan 24 00:56:54.250271 kubelet[2488]: I0124 00:56:54.250162 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/daf15e6c-e319-4b6a-b81a-cb796e8f2eb5-calico-apiserver-certs\") pod \"calico-apiserver-7599cd6db5-l6tp2\" (UID: \"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5\") " pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" Jan 24 00:56:54.250381 kubelet[2488]: I0124 00:56:54.250182 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8s6f\" (UniqueName: \"kubernetes.io/projected/203aa399-08cf-4bd0-a44a-0a01debc5662-kube-api-access-w8s6f\") pod \"goldmane-666569f655-fw5bw\" (UID: \"203aa399-08cf-4bd0-a44a-0a01debc5662\") " pod="calico-system/goldmane-666569f655-fw5bw" Jan 24 00:56:54.250916 kubelet[2488]: I0124 00:56:54.250568 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/578c81f9-e877-4bb0-855e-4f7e7d4c1973-config-volume\") pod \"coredns-674b8bbfcf-xqktf\" (UID: \"578c81f9-e877-4bb0-855e-4f7e7d4c1973\") " pod="kube-system/coredns-674b8bbfcf-xqktf" Jan 24 00:56:54.250916 kubelet[2488]: I0124 00:56:54.250588 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjv42\" (UniqueName: \"kubernetes.io/projected/daf15e6c-e319-4b6a-b81a-cb796e8f2eb5-kube-api-access-rjv42\") pod \"calico-apiserver-7599cd6db5-l6tp2\" (UID: \"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5\") " pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" Jan 24 00:56:54.250916 kubelet[2488]: I0124 00:56:54.250636 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmz9q\" (UniqueName: \"kubernetes.io/projected/7fe9f3ea-2686-424b-8279-86ca8e141669-kube-api-access-hmz9q\") pod \"calico-apiserver-5df9b4f89d-kh5pp\" (UID: \"7fe9f3ea-2686-424b-8279-86ca8e141669\") " 
pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" Jan 24 00:56:54.250916 kubelet[2488]: I0124 00:56:54.250665 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkclm\" (UniqueName: \"kubernetes.io/projected/5b041d1d-1f61-468c-922a-8ff10d433023-kube-api-access-nkclm\") pod \"whisker-65fd74bb7-xjhrv\" (UID: \"5b041d1d-1f61-468c-922a-8ff10d433023\") " pod="calico-system/whisker-65fd74bb7-xjhrv" Jan 24 00:56:54.250916 kubelet[2488]: I0124 00:56:54.250679 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsr8m\" (UniqueName: \"kubernetes.io/projected/578c81f9-e877-4bb0-855e-4f7e7d4c1973-kube-api-access-nsr8m\") pod \"coredns-674b8bbfcf-xqktf\" (UID: \"578c81f9-e877-4bb0-855e-4f7e7d4c1973\") " pod="kube-system/coredns-674b8bbfcf-xqktf" Jan 24 00:56:54.251531 systemd[1]: Created slice kubepods-burstable-podcc1bbdfb_ba1e_48a0_8b73_32d52c484b6a.slice - libcontainer container kubepods-burstable-podcc1bbdfb_ba1e_48a0_8b73_32d52c484b6a.slice. Jan 24 00:56:54.260168 systemd[1]: Created slice kubepods-burstable-pod578c81f9_e877_4bb0_855e_4f7e7d4c1973.slice - libcontainer container kubepods-burstable-pod578c81f9_e877_4bb0_855e_4f7e7d4c1973.slice. Jan 24 00:56:54.503722 containerd[1450]: time="2026-01-24T00:56:54.503671255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-2dhf7,Uid:e706493e-7f12-4ad3-8c2a-5a508961b9f4,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:54.517557 containerd[1450]: time="2026-01-24T00:56:54.517502709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b646d8bfb-nxb7c,Uid:0364261e-0b7f-4a7d-aec9-83adc08c04f8,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:54.526941 containerd[1450]: time="2026-01-24T00:56:54.526888181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw5bw,Uid:203aa399-08cf-4bd0-a44a-0a01debc5662,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:54.532862 containerd[1450]: time="2026-01-24T00:56:54.532800885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-kh5pp,Uid:7fe9f3ea-2686-424b-8279-86ca8e141669,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:54.540376 containerd[1450]: time="2026-01-24T00:56:54.540336134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65fd74bb7-xjhrv,Uid:5b041d1d-1f61-468c-922a-8ff10d433023,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:54.551475 containerd[1450]: time="2026-01-24T00:56:54.551368161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7599cd6db5-l6tp2,Uid:daf15e6c-e319-4b6a-b81a-cb796e8f2eb5,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:56:54.556646 kubelet[2488]: E0124 00:56:54.556622 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:54.560255 containerd[1450]: time="2026-01-24T00:56:54.559950080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2tdmj,Uid:cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:54.564894 kubelet[2488]: E0124 00:56:54.564854 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:54.566667 containerd[1450]: 
time="2026-01-24T00:56:54.566316306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqktf,Uid:578c81f9-e877-4bb0-855e-4f7e7d4c1973,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:54.594013 systemd[1]: Created slice kubepods-besteffort-pod34330cde_9cb8_45f6_8598_34068565d43c.slice - libcontainer container kubepods-besteffort-pod34330cde_9cb8_45f6_8598_34068565d43c.slice. Jan 24 00:56:54.613512 containerd[1450]: time="2026-01-24T00:56:54.613349269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t469r,Uid:34330cde-9cb8-45f6-8598-34068565d43c,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:54.710895 kubelet[2488]: E0124 00:56:54.710767 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:56:54.719160 containerd[1450]: time="2026-01-24T00:56:54.718951428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:56:54.794381 containerd[1450]: time="2026-01-24T00:56:54.793967973Z" level=error msg="Failed to destroy network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.797641 containerd[1450]: time="2026-01-24T00:56:54.797613269Z" level=error msg="encountered an error cleaning up failed sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.797757 containerd[1450]: time="2026-01-24T00:56:54.797736327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65fd74bb7-xjhrv,Uid:5b041d1d-1f61-468c-922a-8ff10d433023,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.798138 kubelet[2488]: E0124 00:56:54.798094 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.798191 kubelet[2488]: E0124 00:56:54.798163 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65fd74bb7-xjhrv" Jan 24 00:56:54.798191 kubelet[2488]: E0124 00:56:54.798183 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65fd74bb7-xjhrv" Jan 24 00:56:54.798286 kubelet[2488]: E0124 00:56:54.798248 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65fd74bb7-xjhrv_calico-system(5b041d1d-1f61-468c-922a-8ff10d433023)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65fd74bb7-xjhrv_calico-system(5b041d1d-1f61-468c-922a-8ff10d433023)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65fd74bb7-xjhrv" podUID="5b041d1d-1f61-468c-922a-8ff10d433023" Jan 24 00:56:54.804760 containerd[1450]: time="2026-01-24T00:56:54.804572743Z" level=error msg="Failed to destroy network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.806271 containerd[1450]: time="2026-01-24T00:56:54.806206546Z" level=error msg="encountered an error cleaning up failed sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.806332 containerd[1450]: time="2026-01-24T00:56:54.806282106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b646d8bfb-nxb7c,Uid:0364261e-0b7f-4a7d-aec9-83adc08c04f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.808706 kubelet[2488]: E0124 00:56:54.806883 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.808706 kubelet[2488]: E0124 00:56:54.807156 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" Jan 24 00:56:54.808706 kubelet[2488]: E0124 00:56:54.807196 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" Jan 24 00:56:54.808847 kubelet[2488]: E0124 00:56:54.807250 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b646d8bfb-nxb7c_calico-system(0364261e-0b7f-4a7d-aec9-83adc08c04f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b646d8bfb-nxb7c_calico-system(0364261e-0b7f-4a7d-aec9-83adc08c04f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:56:54.816665 containerd[1450]: time="2026-01-24T00:56:54.816603336Z" level=error msg="Failed to destroy network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.817092 containerd[1450]: time="2026-01-24T00:56:54.816603907Z" level=error msg="Failed to destroy network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.817735 containerd[1450]: time="2026-01-24T00:56:54.817710220Z" level=error msg="encountered an error cleaning up failed sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.817950 containerd[1450]: time="2026-01-24T00:56:54.817926932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-kh5pp,Uid:7fe9f3ea-2686-424b-8279-86ca8e141669,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.818360 containerd[1450]: time="2026-01-24T00:56:54.818140198Z" level=error msg="encountered an error cleaning up failed sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.818640 kubelet[2488]: E0124 00:56:54.818559 2488 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.818640 kubelet[2488]: E0124 00:56:54.818627 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" Jan 24 00:56:54.818716 kubelet[2488]: E0124 00:56:54.818654 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" Jan 24 00:56:54.818716 kubelet[2488]: E0124 00:56:54.818700 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df9b4f89d-kh5pp_calico-apiserver(7fe9f3ea-2686-424b-8279-86ca8e141669)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df9b4f89d-kh5pp_calico-apiserver(7fe9f3ea-2686-424b-8279-86ca8e141669)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:56:54.818937 containerd[1450]: time="2026-01-24T00:56:54.818794359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw5bw,Uid:203aa399-08cf-4bd0-a44a-0a01debc5662,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.819202 kubelet[2488]: E0124 00:56:54.819129 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.819202 kubelet[2488]: E0124 00:56:54.819174 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fw5bw" Jan 24 00:56:54.819202 kubelet[2488]: E0124 00:56:54.819189 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fw5bw" Jan 24 00:56:54.819284 kubelet[2488]: E0124 00:56:54.819223 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fw5bw_calico-system(203aa399-08cf-4bd0-a44a-0a01debc5662)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fw5bw_calico-system(203aa399-08cf-4bd0-a44a-0a01debc5662)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:56:54.820985 containerd[1450]: time="2026-01-24T00:56:54.820948948Z" level=error msg="Failed to destroy network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.822187 containerd[1450]: time="2026-01-24T00:56:54.822107228Z" level=error msg="encountered an error cleaning up failed sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.822241 containerd[1450]: time="2026-01-24T00:56:54.822207565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-2dhf7,Uid:e706493e-7f12-4ad3-8c2a-5a508961b9f4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.824182 kubelet[2488]: E0124 00:56:54.824123 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.824272 kubelet[2488]: E0124 00:56:54.824237 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" Jan 24 00:56:54.824272 kubelet[2488]: E0124 00:56:54.824256 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" Jan 24 00:56:54.824329 kubelet[2488]: E0124 00:56:54.824293 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df9b4f89d-2dhf7_calico-apiserver(e706493e-7f12-4ad3-8c2a-5a508961b9f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df9b4f89d-2dhf7_calico-apiserver(e706493e-7f12-4ad3-8c2a-5a508961b9f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:56:54.860234 containerd[1450]: time="2026-01-24T00:56:54.860170345Z" level=error msg="Failed to destroy network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.860717 containerd[1450]: time="2026-01-24T00:56:54.860679791Z" level=error msg="encountered an error cleaning up failed sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.860788 containerd[1450]: time="2026-01-24T00:56:54.860747235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t469r,Uid:34330cde-9cb8-45f6-8598-34068565d43c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.862350 containerd[1450]: time="2026-01-24T00:56:54.861679456Z" level=error msg="Failed to destroy network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.862531 kubelet[2488]: E0124 00:56:54.861936 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.862531 kubelet[2488]: E0124 00:56:54.862012 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:54.862531 kubelet[2488]: E0124 00:56:54.862035 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t469r" Jan 24 00:56:54.862624 kubelet[2488]: E0124 00:56:54.862095 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t469r_calico-system(34330cde-9cb8-45f6-8598-34068565d43c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t469r_calico-system(34330cde-9cb8-45f6-8598-34068565d43c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:56:54.863490 containerd[1450]: time="2026-01-24T00:56:54.863239732Z" level=error msg="encountered an error cleaning up failed sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.863490 containerd[1450]: time="2026-01-24T00:56:54.863279466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2tdmj,Uid:cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.863720 kubelet[2488]: E0124 00:56:54.863417 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.863720 kubelet[2488]: E0124 00:56:54.863652 2488 kuberuntime_sandbox.go:70] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2tdmj" Jan 24 00:56:54.863897 kubelet[2488]: E0124 00:56:54.863668 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2tdmj" Jan 24 00:56:54.864135 kubelet[2488]: E0124 00:56:54.864034 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2tdmj_kube-system(cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2tdmj_kube-system(cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2tdmj" podUID="cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a" Jan 24 00:56:54.875675 containerd[1450]: time="2026-01-24T00:56:54.875603560Z" level=error msg="Failed to destroy network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.876141 containerd[1450]: time="2026-01-24T00:56:54.876044549Z" level=error msg="encountered an error cleaning up failed sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.876253 containerd[1450]: time="2026-01-24T00:56:54.876148973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7599cd6db5-l6tp2,Uid:daf15e6c-e319-4b6a-b81a-cb796e8f2eb5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.876382 kubelet[2488]: E0124 00:56:54.876343 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.876466 kubelet[2488]: E0124 
00:56:54.876384 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" Jan 24 00:56:54.876466 kubelet[2488]: E0124 00:56:54.876418 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" Jan 24 00:56:54.876548 kubelet[2488]: E0124 00:56:54.876516 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7599cd6db5-l6tp2_calico-apiserver(daf15e6c-e319-4b6a-b81a-cb796e8f2eb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7599cd6db5-l6tp2_calico-apiserver(daf15e6c-e319-4b6a-b81a-cb796e8f2eb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:56:54.882405 containerd[1450]: time="2026-01-24T00:56:54.882302782Z" level=error msg="Failed to destroy network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.882756 containerd[1450]: time="2026-01-24T00:56:54.882696864Z" level=error msg="encountered an error cleaning up failed sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.882789 containerd[1450]: time="2026-01-24T00:56:54.882760431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqktf,Uid:578c81f9-e877-4bb0-855e-4f7e7d4c1973,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.883077 kubelet[2488]: E0124 00:56:54.882992 2488 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:54.883077 kubelet[2488]: E0124 00:56:54.883077 2488 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xqktf" Jan 24 00:56:54.883196 kubelet[2488]: E0124 00:56:54.883094 2488 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xqktf" Jan 24 00:56:54.883229 kubelet[2488]: E0124 00:56:54.883202 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xqktf_kube-system(578c81f9-e877-4bb0-855e-4f7e7d4c1973)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xqktf_kube-system(578c81f9-e877-4bb0-855e-4f7e7d4c1973)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xqktf" podUID="578c81f9-e877-4bb0-855e-4f7e7d4c1973" Jan 24 00:56:55.713356 kubelet[2488]: I0124 00:56:55.712614 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:56:55.717601 kubelet[2488]: I0124 00:56:55.716746 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:56:55.726367 containerd[1450]: time="2026-01-24T00:56:55.726301200Z" level=info msg="StopPodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\"" Jan 24 00:56:55.727377 containerd[1450]: time="2026-01-24T00:56:55.727346261Z" level=info msg="Ensure that sandbox d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029 in task-service has been cleanup successfully" Jan 24 00:56:55.731656 containerd[1450]: time="2026-01-24T00:56:55.731590824Z" level=info msg="StopPodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\"" Jan 24 00:56:55.731799 containerd[1450]: time="2026-01-24T00:56:55.731768443Z" level=info msg="Ensure that sandbox fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9 in task-service has been cleanup successfully" Jan 24 00:56:55.732023 kubelet[2488]: I0124 00:56:55.732006 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:56:55.734722 containerd[1450]: time="2026-01-24T00:56:55.734680055Z" level=info msg="StopPodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\"" Jan 24 00:56:55.737235 kubelet[2488]: I0124 
00:56:55.737216 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:56:55.738119 containerd[1450]: time="2026-01-24T00:56:55.737895555Z" level=info msg="StopPodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\"" Jan 24 00:56:55.738541 containerd[1450]: time="2026-01-24T00:56:55.738522885Z" level=info msg="Ensure that sandbox f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b in task-service has been cleanup successfully" Jan 24 00:56:55.741278 kubelet[2488]: I0124 00:56:55.740937 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:56:55.741692 containerd[1450]: time="2026-01-24T00:56:55.741356429Z" level=info msg="StopPodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\"" Jan 24 00:56:55.741692 containerd[1450]: time="2026-01-24T00:56:55.741521825Z" level=info msg="Ensure that sandbox c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f in task-service has been cleanup successfully" Jan 24 00:56:55.742586 kubelet[2488]: I0124 00:56:55.742250 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:56:55.743357 containerd[1450]: time="2026-01-24T00:56:55.743317462Z" level=info msg="Ensure that sandbox 384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b in task-service has been cleanup successfully" Jan 24 00:56:55.745174 containerd[1450]: time="2026-01-24T00:56:55.745145669Z" level=info msg="StopPodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\"" Jan 24 00:56:55.745299 containerd[1450]: time="2026-01-24T00:56:55.745258019Z" level=info msg="Ensure that sandbox 23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba in task-service has been cleanup successfully" Jan 24 00:56:55.746399 kubelet[2488]: I0124 00:56:55.746355 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:56:55.748914 containerd[1450]: time="2026-01-24T00:56:55.748893753Z" level=info msg="StopPodSandbox for \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\"" Jan 24 00:56:55.750267 containerd[1450]: time="2026-01-24T00:56:55.750245283Z" level=info msg="Ensure that sandbox 4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607 in task-service has been cleanup successfully" Jan 24 00:56:55.751592 kubelet[2488]: I0124 00:56:55.751573 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:56:55.752309 containerd[1450]: time="2026-01-24T00:56:55.752241765Z" level=info msg="StopPodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\"" Jan 24 00:56:55.752503 containerd[1450]: time="2026-01-24T00:56:55.752419885Z" level=info msg="Ensure that sandbox 08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e in task-service has been cleanup successfully" Jan 24 00:56:55.756345 kubelet[2488]: I0124 00:56:55.756274 2488 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:56:55.758391 
containerd[1450]: time="2026-01-24T00:56:55.757695517Z" level=info msg="StopPodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\"" Jan 24 00:56:55.758391 containerd[1450]: time="2026-01-24T00:56:55.757845506Z" level=info msg="Ensure that sandbox b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd in task-service has been cleanup successfully" Jan 24 00:56:55.805180 containerd[1450]: time="2026-01-24T00:56:55.805085923Z" level=error msg="StopPodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" failed" error="failed to destroy network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.805383 kubelet[2488]: E0124 00:56:55.805312 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:56:55.805498 kubelet[2488]: E0124 00:56:55.805361 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029"} Jan 24 00:56:55.805498 kubelet[2488]: E0124 00:56:55.805490 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.805650 kubelet[2488]: E0124 00:56:55.805512 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:56:55.819719 containerd[1450]: time="2026-01-24T00:56:55.819646368Z" level=error msg="StopPodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" failed" error="failed to destroy network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.819934 kubelet[2488]: E0124 00:56:55.819897 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:56:55.820027 kubelet[2488]: E0124 00:56:55.819937 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e"} Jan 24 00:56:55.820027 kubelet[2488]: E0124 00:56:55.819964 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.820027 kubelet[2488]: E0124 00:56:55.819986 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2tdmj" podUID="cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a" Jan 24 00:56:55.824496 containerd[1450]: time="2026-01-24T00:56:55.824451544Z" level=error msg="StopPodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" failed" error="failed to destroy network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.824863 kubelet[2488]: E0124 00:56:55.824731 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:56:55.824863 kubelet[2488]: E0124 00:56:55.824764 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba"} Jan 24 00:56:55.824863 kubelet[2488]: E0124 00:56:55.824791 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fe9f3ea-2686-424b-8279-86ca8e141669\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 24 00:56:55.824863 kubelet[2488]: E0124 00:56:55.824838 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fe9f3ea-2686-424b-8279-86ca8e141669\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:56:55.835982 containerd[1450]: time="2026-01-24T00:56:55.835780544Z" level=error msg="StopPodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" failed" error="failed to destroy network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.835982 containerd[1450]: time="2026-01-24T00:56:55.835899017Z" level=error msg="StopPodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" failed" error="failed to destroy network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.836138 kubelet[2488]: E0124 00:56:55.836044 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:56:55.836138 kubelet[2488]: E0124 00:56:55.836087 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b"} Jan 24 00:56:55.836138 kubelet[2488]: E0124 00:56:55.836117 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34330cde-9cb8-45f6-8598-34068565d43c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.836271 kubelet[2488]: E0124 00:56:55.836139 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34330cde-9cb8-45f6-8598-34068565d43c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:56:55.836271 kubelet[2488]: E0124 00:56:55.836224 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:56:55.836271 kubelet[2488]: E0124 00:56:55.836267 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9"} Jan 24 00:56:55.836368 kubelet[2488]: E0124 00:56:55.836284 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e706493e-7f12-4ad3-8c2a-5a508961b9f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.836368 kubelet[2488]: E0124 00:56:55.836301 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e706493e-7f12-4ad3-8c2a-5a508961b9f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:56:55.840502 containerd[1450]: time="2026-01-24T00:56:55.839571812Z" level=error msg="StopPodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" failed" error="failed to destroy network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.840545 kubelet[2488]: E0124 00:56:55.839785 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:56:55.840545 kubelet[2488]: E0124 00:56:55.839843 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b"} Jan 24 00:56:55.840545 kubelet[2488]: E0124 00:56:55.839871 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"578c81f9-e877-4bb0-855e-4f7e7d4c1973\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.840545 kubelet[2488]: E0124 00:56:55.839887 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"578c81f9-e877-4bb0-855e-4f7e7d4c1973\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xqktf" podUID="578c81f9-e877-4bb0-855e-4f7e7d4c1973" Jan 24 00:56:55.841346 containerd[1450]: time="2026-01-24T00:56:55.841260799Z" level=error msg="StopPodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" failed" error="failed to destroy network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.841640 kubelet[2488]: E0124 00:56:55.841510 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:56:55.841640 kubelet[2488]: E0124 00:56:55.841625 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd"} Jan 24 00:56:55.841762 kubelet[2488]: E0124 00:56:55.841648 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"203aa399-08cf-4bd0-a44a-0a01debc5662\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.841762 kubelet[2488]: E0124 00:56:55.841665 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"203aa399-08cf-4bd0-a44a-0a01debc5662\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:56:55.842491 containerd[1450]: time="2026-01-24T00:56:55.842412818Z" level=error msg="StopPodSandbox for 
\"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" failed" error="failed to destroy network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.842589 kubelet[2488]: E0124 00:56:55.842559 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:56:55.842673 kubelet[2488]: E0124 00:56:55.842596 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607"} Jan 24 00:56:55.842673 kubelet[2488]: E0124 00:56:55.842614 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0364261e-0b7f-4a7d-aec9-83adc08c04f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.842673 kubelet[2488]: E0124 00:56:55.842629 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0364261e-0b7f-4a7d-aec9-83adc08c04f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:56:55.843848 containerd[1450]: time="2026-01-24T00:56:55.843744290Z" level=error msg="StopPodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" failed" error="failed to destroy network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:55.843996 kubelet[2488]: E0124 00:56:55.843963 2488 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:56:55.844037 kubelet[2488]: E0124 00:56:55.843999 2488 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f"} Jan 24 00:56:55.844037 kubelet[2488]: E0124 00:56:55.844020 2488 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b041d1d-1f61-468c-922a-8ff10d433023\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:55.844120 kubelet[2488]: E0124 00:56:55.844038 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b041d1d-1f61-468c-922a-8ff10d433023\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65fd74bb7-xjhrv" podUID="5b041d1d-1f61-468c-922a-8ff10d433023" Jan 24 00:56:59.474628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120502070.mount: Deactivated successfully. Jan 24 00:56:59.653768 containerd[1450]: time="2026-01-24T00:56:59.653680807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:59.654853 containerd[1450]: time="2026-01-24T00:56:59.654759034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:56:59.656190 containerd[1450]: time="2026-01-24T00:56:59.656137505Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:59.658561 containerd[1450]: time="2026-01-24T00:56:59.658403798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:59.659595 containerd[1450]: time="2026-01-24T00:56:59.659384397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.940394798s" Jan 24 00:56:59.659595 containerd[1450]: time="2026-01-24T00:56:59.659568239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:56:59.674151 containerd[1450]: time="2026-01-24T00:56:59.674033382Z" level=info msg="CreateContainer within sandbox \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:56:59.711697 containerd[1450]: time="2026-01-24T00:56:59.711611795Z" level=info msg="CreateContainer within sandbox \"ea00ddb9c3c4f39f6548bc25b24b686520f804b82d7f3dcded9b9bcb020f1274\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec22a60dff34a0639defc3af11bcd8ff257426e20efb9b54238de16f981fc5eb\"" Jan 24 00:56:59.712505 containerd[1450]: time="2026-01-24T00:56:59.712258217Z" level=info msg="StartContainer for \"ec22a60dff34a0639defc3af11bcd8ff257426e20efb9b54238de16f981fc5eb\"" Jan 24 00:56:59.776706 systemd[1]: Started cri-containerd-ec22a60dff34a0639defc3af11bcd8ff257426e20efb9b54238de16f981fc5eb.scope - libcontainer container ec22a60dff34a0639defc3af11bcd8ff257426e20efb9b54238de16f981fc5eb. Jan 24 00:56:59.869800 containerd[1450]: time="2026-01-24T00:56:59.869688079Z" level=info msg="StartContainer for \"ec22a60dff34a0639defc3af11bcd8ff257426e20efb9b54238de16f981fc5eb\" returns successfully" Jan 24 00:56:59.935790 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:56:59.935968 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:57:00.035801 containerd[1450]: time="2026-01-24T00:57:00.035275960Z" level=info msg="StopPodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\"" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.132 [INFO][3851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.133 [INFO][3851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" iface="eth0" netns="/var/run/netns/cni-1788eb0a-a744-2a8d-4e19-9e61ca42c0d3" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.134 [INFO][3851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" iface="eth0" netns="/var/run/netns/cni-1788eb0a-a744-2a8d-4e19-9e61ca42c0d3" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.135 [INFO][3851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" iface="eth0" netns="/var/run/netns/cni-1788eb0a-a744-2a8d-4e19-9e61ca42c0d3" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.135 [INFO][3851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.135 [INFO][3851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.243 [INFO][3863] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.248 [INFO][3863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.248 [INFO][3863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.262 [WARNING][3863] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.264 [INFO][3863] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.266 [INFO][3863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:00.277510 containerd[1450]: 2026-01-24 00:57:00.272 [INFO][3851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:00.277510 containerd[1450]: time="2026-01-24T00:57:00.276863430Z" level=info msg="TearDown network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" successfully" Jan 24 00:57:00.277510 containerd[1450]: time="2026-01-24T00:57:00.276889599Z" level=info msg="StopPodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" returns successfully" Jan 24 00:57:00.399944 kubelet[2488]: I0124 00:57:00.399847 2488 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkclm\" (UniqueName: \"kubernetes.io/projected/5b041d1d-1f61-468c-922a-8ff10d433023-kube-api-access-nkclm\") pod \"5b041d1d-1f61-468c-922a-8ff10d433023\" (UID: \"5b041d1d-1f61-468c-922a-8ff10d433023\") " Jan 24 00:57:00.399944 kubelet[2488]: I0124 00:57:00.399900 2488 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-ca-bundle\") pod \"5b041d1d-1f61-468c-922a-8ff10d433023\" (UID: \"5b041d1d-1f61-468c-922a-8ff10d433023\") " Jan 24 00:57:00.399944 kubelet[2488]: I0124 00:57:00.399933 2488 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-backend-key-pair\") pod \"5b041d1d-1f61-468c-922a-8ff10d433023\" (UID: \"5b041d1d-1f61-468c-922a-8ff10d433023\") " Jan 24 00:57:00.400611 kubelet[2488]: I0124 00:57:00.400577 2488 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5b041d1d-1f61-468c-922a-8ff10d433023" (UID: "5b041d1d-1f61-468c-922a-8ff10d433023"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:57:00.404774 kubelet[2488]: I0124 00:57:00.404737 2488 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b041d1d-1f61-468c-922a-8ff10d433023-kube-api-access-nkclm" (OuterVolumeSpecName: "kube-api-access-nkclm") pod "5b041d1d-1f61-468c-922a-8ff10d433023" (UID: "5b041d1d-1f61-468c-922a-8ff10d433023"). InnerVolumeSpecName "kube-api-access-nkclm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:57:00.404892 kubelet[2488]: I0124 00:57:00.404738 2488 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5b041d1d-1f61-468c-922a-8ff10d433023" (UID: "5b041d1d-1f61-468c-922a-8ff10d433023"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:57:00.475559 systemd[1]: run-netns-cni\x2d1788eb0a\x2da744\x2d2a8d\x2d4e19\x2d9e61ca42c0d3.mount: Deactivated successfully. Jan 24 00:57:00.475678 systemd[1]: var-lib-kubelet-pods-5b041d1d\x2d1f61\x2d468c\x2d922a\x2d8ff10d433023-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnkclm.mount: Deactivated successfully. Jan 24 00:57:00.475752 systemd[1]: var-lib-kubelet-pods-5b041d1d\x2d1f61\x2d468c\x2d922a\x2d8ff10d433023-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:57:00.500941 kubelet[2488]: I0124 00:57:00.500867 2488 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:57:00.500941 kubelet[2488]: I0124 00:57:00.500912 2488 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nkclm\" (UniqueName: \"kubernetes.io/projected/5b041d1d-1f61-468c-922a-8ff10d433023-kube-api-access-nkclm\") on node \"localhost\" DevicePath \"\"" Jan 24 00:57:00.500941 kubelet[2488]: I0124 00:57:00.500922 2488 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b041d1d-1f61-468c-922a-8ff10d433023-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:57:00.765210 kubelet[2488]: I0124 00:57:00.765024 2488 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:57:00.765563 kubelet[2488]: E0124 00:57:00.765417 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:00.825320 kubelet[2488]: E0124 00:57:00.824361 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:00.825320 kubelet[2488]: E0124 00:57:00.824733 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:00.832228 systemd[1]: Removed slice kubepods-besteffort-pod5b041d1d_1f61_468c_922a_8ff10d433023.slice - libcontainer container kubepods-besteffort-pod5b041d1d_1f61_468c_922a_8ff10d433023.slice. 
Jan 24 00:57:00.863482 kubelet[2488]: I0124 00:57:00.863265 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sm2k5" podStartSLOduration=2.448068427 podStartE2EDuration="12.863246279s" podCreationTimestamp="2026-01-24 00:56:48 +0000 UTC" firstStartedPulling="2026-01-24 00:56:49.246239644 +0000 UTC m=+17.756935036" lastFinishedPulling="2026-01-24 00:56:59.661417496 +0000 UTC m=+28.172112888" observedRunningTime="2026-01-24 00:57:00.846132087 +0000 UTC m=+29.356827489" watchObservedRunningTime="2026-01-24 00:57:00.863246279 +0000 UTC m=+29.373941672" Jan 24 00:57:00.939672 systemd[1]: Created slice kubepods-besteffort-pod05590686_f70c_407a_ace8_b12a72f3a4b1.slice - libcontainer container kubepods-besteffort-pod05590686_f70c_407a_ace8_b12a72f3a4b1.slice. Jan 24 00:57:01.006702 kubelet[2488]: I0124 00:57:01.006513 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05590686-f70c-407a-ace8-b12a72f3a4b1-whisker-backend-key-pair\") pod \"whisker-5d4995d8c5-4f2ww\" (UID: \"05590686-f70c-407a-ace8-b12a72f3a4b1\") " pod="calico-system/whisker-5d4995d8c5-4f2ww" Jan 24 00:57:01.006702 kubelet[2488]: I0124 00:57:01.006576 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05590686-f70c-407a-ace8-b12a72f3a4b1-whisker-ca-bundle\") pod \"whisker-5d4995d8c5-4f2ww\" (UID: \"05590686-f70c-407a-ace8-b12a72f3a4b1\") " pod="calico-system/whisker-5d4995d8c5-4f2ww" Jan 24 00:57:01.006702 kubelet[2488]: I0124 00:57:01.006623 2488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fwmw\" (UniqueName: \"kubernetes.io/projected/05590686-f70c-407a-ace8-b12a72f3a4b1-kube-api-access-5fwmw\") pod \"whisker-5d4995d8c5-4f2ww\" (UID: \"05590686-f70c-407a-ace8-b12a72f3a4b1\") " pod="calico-system/whisker-5d4995d8c5-4f2ww" Jan 24 00:57:01.245178 containerd[1450]: time="2026-01-24T00:57:01.245095705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4995d8c5-4f2ww,Uid:05590686-f70c-407a-ace8-b12a72f3a4b1,Namespace:calico-system,Attempt:0,}" Jan 24 00:57:01.408833 systemd-networkd[1372]: cali8cd5820c0e8: Link UP Jan 24 00:57:01.413565 systemd-networkd[1372]: cali8cd5820c0e8: Gained carrier Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.280 [INFO][3910] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.294 [INFO][3910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0 whisker-5d4995d8c5- calico-system 05590686-f70c-407a-ace8-b12a72f3a4b1 924 0 2026-01-24 00:57:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d4995d8c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d4995d8c5-4f2ww eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8cd5820c0e8 [] [] }} ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.294 [INFO][3910] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.322 [INFO][3924] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" HandleID="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Workload="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.323 [INFO][3924] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" HandleID="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Workload="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d4995d8c5-4f2ww", "timestamp":"2026-01-24 00:57:01.322793912 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.323 [INFO][3924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.323 [INFO][3924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.323 [INFO][3924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.332 [INFO][3924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.343 [INFO][3924] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.349 [INFO][3924] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.352 [INFO][3924] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.356 [INFO][3924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.357 [INFO][3924] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.363 [INFO][3924] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4 Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.369 [INFO][3924] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 
00:57:01.374 [INFO][3924] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.374 [INFO][3924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" host="localhost" Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.374 [INFO][3924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:01.428485 containerd[1450]: 2026-01-24 00:57:01.374 [INFO][3924] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" HandleID="k8s-pod-network.b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Workload="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.429184 containerd[1450]: 2026-01-24 00:57:01.380 [INFO][3910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0", GenerateName:"whisker-5d4995d8c5-", Namespace:"calico-system", SelfLink:"", UID:"05590686-f70c-407a-ace8-b12a72f3a4b1", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d4995d8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d4995d8c5-4f2ww", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8cd5820c0e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:01.429184 containerd[1450]: 2026-01-24 00:57:01.381 [INFO][3910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.429184 containerd[1450]: 2026-01-24 00:57:01.381 [INFO][3910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cd5820c0e8 ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.429184 containerd[1450]: 2026-01-24 00:57:01.401 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.429184 containerd[1450]: 2026-01-24 00:57:01.403 [INFO][3910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0", GenerateName:"whisker-5d4995d8c5-", Namespace:"calico-system", SelfLink:"", UID:"05590686-f70c-407a-ace8-b12a72f3a4b1", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d4995d8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4", Pod:"whisker-5d4995d8c5-4f2ww", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8cd5820c0e8", MAC:"fa:e8:78:ff:f2:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:01.429184 containerd[1450]: 2026-01-24 00:57:01.420 [INFO][3910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4" Namespace="calico-system" Pod="whisker-5d4995d8c5-4f2ww" WorkloadEndpoint="localhost-k8s-whisker--5d4995d8c5--4f2ww-eth0" Jan 24 00:57:01.488336 containerd[1450]: time="2026-01-24T00:57:01.487558213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:01.488336 containerd[1450]: time="2026-01-24T00:57:01.487624717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:01.488336 containerd[1450]: time="2026-01-24T00:57:01.487635206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:01.488336 containerd[1450]: time="2026-01-24T00:57:01.487726426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:01.524591 systemd[1]: Started cri-containerd-b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4.scope - libcontainer container b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4. 
Jan 24 00:57:01.555389 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:01.589592 kubelet[2488]: I0124 00:57:01.589534 2488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b041d1d-1f61-468c-922a-8ff10d433023" path="/var/lib/kubelet/pods/5b041d1d-1f61-468c-922a-8ff10d433023/volumes" Jan 24 00:57:01.624512 containerd[1450]: time="2026-01-24T00:57:01.623702972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4995d8c5-4f2ww,Uid:05590686-f70c-407a-ace8-b12a72f3a4b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6725839af846db2e9eb3b0a6c402a936e0b15337c57aa9892e0d2a0e0db86a4\"" Jan 24 00:57:01.628012 containerd[1450]: time="2026-01-24T00:57:01.627911158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:01.659567 kernel: bpftool[4108]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:57:01.692524 containerd[1450]: time="2026-01-24T00:57:01.691232605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:01.701132 containerd[1450]: time="2026-01-24T00:57:01.693883961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:01.701229 containerd[1450]: time="2026-01-24T00:57:01.694289924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:57:01.701257 kubelet[2488]: E0124 00:57:01.701231 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:01.701311 kubelet[2488]: E0124 00:57:01.701274 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:01.701456 kubelet[2488]: E0124 00:57:01.701398 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:94fe4a6d3bbc42d590b86714a22fd0ec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d4995d8c5-4f2ww_calico-system(05590686-f70c-407a-ace8-b12a72f3a4b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:01.703196 containerd[1450]: time="2026-01-24T00:57:01.703141669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:01.763472 containerd[1450]: time="2026-01-24T00:57:01.763347524Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:01.764878 containerd[1450]: time="2026-01-24T00:57:01.764760341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:01.764878 containerd[1450]: time="2026-01-24T00:57:01.764796648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:01.765058 kubelet[2488]: E0124 00:57:01.765025 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:01.765164 kubelet[2488]: E0124 00:57:01.765071 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:01.765258 kubelet[2488]: E0124 00:57:01.765177 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d4995d8c5-4f2ww_calico-system(05590686-f70c-407a-ace8-b12a72f3a4b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:01.766867 kubelet[2488]: E0124 00:57:01.766782 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d4995d8c5-4f2ww" podUID="05590686-f70c-407a-ace8-b12a72f3a4b1" Jan 24 00:57:01.828313 kubelet[2488]: E0124 00:57:01.828023 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 
00:57:01.830307 kubelet[2488]: E0124 00:57:01.830191 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d4995d8c5-4f2ww" podUID="05590686-f70c-407a-ace8-b12a72f3a4b1" Jan 24 00:57:01.937682 systemd-networkd[1372]: vxlan.calico: Link UP Jan 24 00:57:01.937693 systemd-networkd[1372]: vxlan.calico: Gained carrier Jan 24 00:57:02.832776 kubelet[2488]: E0124 00:57:02.832698 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:02.836257 kubelet[2488]: E0124 00:57:02.834734 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d4995d8c5-4f2ww" podUID="05590686-f70c-407a-ace8-b12a72f3a4b1" Jan 24 00:57:02.946713 systemd-networkd[1372]: cali8cd5820c0e8: Gained IPv6LL Jan 24 00:57:03.074700 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Jan 24 00:57:07.586401 containerd[1450]: time="2026-01-24T00:57:07.585514365Z" level=info msg="StopPodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\"" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.635 [INFO][4254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.635 [INFO][4254] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" iface="eth0" netns="/var/run/netns/cni-1d1c50b3-515d-517d-f9b9-583c7a73e51f" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.635 [INFO][4254] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" iface="eth0" netns="/var/run/netns/cni-1d1c50b3-515d-517d-f9b9-583c7a73e51f" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.636 [INFO][4254] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" iface="eth0" netns="/var/run/netns/cni-1d1c50b3-515d-517d-f9b9-583c7a73e51f" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.636 [INFO][4254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.636 [INFO][4254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.666 [INFO][4263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.667 [INFO][4263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.667 [INFO][4263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.674 [WARNING][4263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.674 [INFO][4263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.676 [INFO][4263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:07.682178 containerd[1450]: 2026-01-24 00:57:07.678 [INFO][4254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:07.682813 containerd[1450]: time="2026-01-24T00:57:07.682486226Z" level=info msg="TearDown network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" successfully" Jan 24 00:57:07.682813 containerd[1450]: time="2026-01-24T00:57:07.682513406Z" level=info msg="StopPodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" returns successfully" Jan 24 00:57:07.684142 containerd[1450]: time="2026-01-24T00:57:07.684112474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-2dhf7,Uid:e706493e-7f12-4ad3-8c2a-5a508961b9f4,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:57:07.685483 systemd[1]: run-netns-cni\x2d1d1c50b3\x2d515d\x2d517d\x2df9b9\x2d583c7a73e51f.mount: Deactivated successfully. 
Jan 24 00:57:07.847136 systemd-networkd[1372]: calie8de8bdbef4: Link UP Jan 24 00:57:07.847537 systemd-networkd[1372]: calie8de8bdbef4: Gained carrier Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.746 [INFO][4271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0 calico-apiserver-5df9b4f89d- calico-apiserver e706493e-7f12-4ad3-8c2a-5a508961b9f4 969 0 2026-01-24 00:56:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df9b4f89d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5df9b4f89d-2dhf7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie8de8bdbef4 [] [] }} ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.747 [INFO][4271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.791 [INFO][4285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" HandleID="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.791 [INFO][4285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" HandleID="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc0e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5df9b4f89d-2dhf7", "timestamp":"2026-01-24 00:57:07.791566173 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.792 [INFO][4285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.792 [INFO][4285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.792 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.801 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.812 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.817 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.820 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.823 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.823 [INFO][4285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.825 [INFO][4285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.831 [INFO][4285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.840 [INFO][4285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.840 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" host="localhost" Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.840 [INFO][4285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:07.861960 containerd[1450]: 2026-01-24 00:57:07.840 [INFO][4285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" HandleID="k8s-pod-network.5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.862788 containerd[1450]: 2026-01-24 00:57:07.843 [INFO][4271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e706493e-7f12-4ad3-8c2a-5a508961b9f4", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5df9b4f89d-2dhf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8de8bdbef4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:07.862788 containerd[1450]: 2026-01-24 00:57:07.844 [INFO][4271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.862788 containerd[1450]: 2026-01-24 00:57:07.844 [INFO][4271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8de8bdbef4 ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.862788 containerd[1450]: 2026-01-24 00:57:07.848 [INFO][4271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.862788 containerd[1450]: 2026-01-24 00:57:07.848 [INFO][4271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e706493e-7f12-4ad3-8c2a-5a508961b9f4", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd", Pod:"calico-apiserver-5df9b4f89d-2dhf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8de8bdbef4", MAC:"ea:79:84:3a:40:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:07.862788 containerd[1450]: 2026-01-24 00:57:07.858 [INFO][4271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-2dhf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:07.889924 containerd[1450]: time="2026-01-24T00:57:07.889730277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:07.889924 containerd[1450]: time="2026-01-24T00:57:07.889807841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:07.889924 containerd[1450]: time="2026-01-24T00:57:07.889824792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:07.890080 containerd[1450]: time="2026-01-24T00:57:07.889958902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:07.928772 systemd[1]: Started cri-containerd-5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd.scope - libcontainer container 5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd. 
Jan 24 00:57:07.946321 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:07.978299 containerd[1450]: time="2026-01-24T00:57:07.978184799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-2dhf7,Uid:e706493e-7f12-4ad3-8c2a-5a508961b9f4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd\"" Jan 24 00:57:07.980547 containerd[1450]: time="2026-01-24T00:57:07.980500389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:08.042240 containerd[1450]: time="2026-01-24T00:57:08.042140982Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:08.044034 containerd[1450]: time="2026-01-24T00:57:08.043912581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:08.044034 containerd[1450]: time="2026-01-24T00:57:08.043961567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:08.044339 kubelet[2488]: E0124 00:57:08.044264 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:08.044339 kubelet[2488]: E0124 00:57:08.044322 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:08.044760 kubelet[2488]: E0124 00:57:08.044531 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2k96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df9b4f89d-2dhf7_calico-apiserver(e706493e-7f12-4ad3-8c2a-5a508961b9f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:08.045966 kubelet[2488]: E0124 00:57:08.045921 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:57:08.585290 containerd[1450]: time="2026-01-24T00:57:08.585221720Z" level=info msg="StopPodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\"" Jan 24 00:57:08.585290 containerd[1450]: time="2026-01-24T00:57:08.585261063Z" level=info msg="StopPodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\"" Jan 24 00:57:08.585690 containerd[1450]: time="2026-01-24T00:57:08.585639317Z" level=info msg="StopPodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\"" Jan 24 00:57:08.585894 containerd[1450]: time="2026-01-24T00:57:08.585221819Z" level=info msg="StopPodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\"" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.656 [INFO][4385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.656 [INFO][4385] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" iface="eth0" netns="/var/run/netns/cni-71cc1014-339b-7279-8058-8571419be93c" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.657 [INFO][4385] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" iface="eth0" netns="/var/run/netns/cni-71cc1014-339b-7279-8058-8571419be93c" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.657 [INFO][4385] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" iface="eth0" netns="/var/run/netns/cni-71cc1014-339b-7279-8058-8571419be93c" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.657 [INFO][4385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.657 [INFO][4385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.705 [INFO][4416] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.705 [INFO][4416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.705 [INFO][4416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.713 [WARNING][4416] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.713 [INFO][4416] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.716 [INFO][4416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:08.724763 containerd[1450]: 2026-01-24 00:57:08.720 [INFO][4385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:08.727125 containerd[1450]: time="2026-01-24T00:57:08.726805938Z" level=info msg="TearDown network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" successfully" Jan 24 00:57:08.729295 systemd[1]: run-netns-cni\x2d71cc1014\x2d339b\x2d7279\x2d8058\x2d8571419be93c.mount: Deactivated successfully. 
Jan 24 00:57:08.731038 containerd[1450]: time="2026-01-24T00:57:08.730029000Z" level=info msg="StopPodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" returns successfully" Jan 24 00:57:08.731101 kubelet[2488]: E0124 00:57:08.730961 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:08.732539 containerd[1450]: time="2026-01-24T00:57:08.732334278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2tdmj,Uid:cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a,Namespace:kube-system,Attempt:1,}" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.690 [INFO][4384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.690 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" iface="eth0" netns="/var/run/netns/cni-64f6576f-97d6-8a05-cb57-c881e960ccf2" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.690 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" iface="eth0" netns="/var/run/netns/cni-64f6576f-97d6-8a05-cb57-c881e960ccf2" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.690 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" iface="eth0" netns="/var/run/netns/cni-64f6576f-97d6-8a05-cb57-c881e960ccf2" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.690 [INFO][4384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.690 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.717 [INFO][4426] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.717 [INFO][4426] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.717 [INFO][4426] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.728 [WARNING][4426] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.728 [INFO][4426] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.733 [INFO][4426] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:08.741940 containerd[1450]: 2026-01-24 00:57:08.738 [INFO][4384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:08.744831 containerd[1450]: time="2026-01-24T00:57:08.744743334Z" level=info msg="TearDown network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" successfully" Jan 24 00:57:08.744831 containerd[1450]: time="2026-01-24T00:57:08.744807804Z" level=info msg="StopPodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" returns successfully" Jan 24 00:57:08.746639 containerd[1450]: time="2026-01-24T00:57:08.745669408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7599cd6db5-l6tp2,Uid:daf15e6c-e319-4b6a-b81a-cb796e8f2eb5,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:57:08.750004 systemd[1]: run-netns-cni\x2d64f6576f\x2d97d6\x2d8a05\x2dcb57\x2dc881e960ccf2.mount: Deactivated successfully. Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.685 [INFO][4386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.687 [INFO][4386] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" iface="eth0" netns="/var/run/netns/cni-ad234a7c-9cac-043a-524c-5ce1188a781f" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.687 [INFO][4386] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" iface="eth0" netns="/var/run/netns/cni-ad234a7c-9cac-043a-524c-5ce1188a781f" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.688 [INFO][4386] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" iface="eth0" netns="/var/run/netns/cni-ad234a7c-9cac-043a-524c-5ce1188a781f" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.688 [INFO][4386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.689 [INFO][4386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.717 [INFO][4431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.717 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.734 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.742 [WARNING][4431] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.742 [INFO][4431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.746 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:08.757321 containerd[1450]: 2026-01-24 00:57:08.750 [INFO][4386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:08.757321 containerd[1450]: time="2026-01-24T00:57:08.753619374Z" level=info msg="TearDown network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" successfully" Jan 24 00:57:08.757321 containerd[1450]: time="2026-01-24T00:57:08.753648037Z" level=info msg="StopPodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" returns successfully" Jan 24 00:57:08.757321 containerd[1450]: time="2026-01-24T00:57:08.754327994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t469r,Uid:34330cde-9cb8-45f6-8598-34068565d43c,Namespace:calico-system,Attempt:1,}" Jan 24 00:57:08.757132 systemd[1]: run-netns-cni\x2dad234a7c\x2d9cac\x2d043a\x2d524c\x2d5ce1188a781f.mount: Deactivated successfully. Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.682 [INFO][4392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.683 [INFO][4392] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" iface="eth0" netns="/var/run/netns/cni-c5d4ec93-9874-44df-7b3e-73a1240668ab" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.684 [INFO][4392] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" iface="eth0" netns="/var/run/netns/cni-c5d4ec93-9874-44df-7b3e-73a1240668ab" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.684 [INFO][4392] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" iface="eth0" netns="/var/run/netns/cni-c5d4ec93-9874-44df-7b3e-73a1240668ab" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.684 [INFO][4392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.684 [INFO][4392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.732 [INFO][4423] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.732 [INFO][4423] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.746 [INFO][4423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.759 [WARNING][4423] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.759 [INFO][4423] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.766 [INFO][4423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:08.773654 containerd[1450]: 2026-01-24 00:57:08.771 [INFO][4392] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:08.774647 containerd[1450]: time="2026-01-24T00:57:08.774390358Z" level=info msg="TearDown network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" successfully" Jan 24 00:57:08.774647 containerd[1450]: time="2026-01-24T00:57:08.774416266Z" level=info msg="StopPodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" returns successfully" Jan 24 00:57:08.775763 containerd[1450]: time="2026-01-24T00:57:08.775600622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw5bw,Uid:203aa399-08cf-4bd0-a44a-0a01debc5662,Namespace:calico-system,Attempt:1,}" Jan 24 00:57:08.777216 systemd[1]: run-netns-cni\x2dc5d4ec93\x2d9874\x2d44df\x2d7b3e\x2d73a1240668ab.mount: Deactivated successfully. Jan 24 00:57:08.853592 kubelet[2488]: E0124 00:57:08.850760 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:57:08.945025 systemd-networkd[1372]: cali01c461dd15a: Link UP Jan 24 00:57:08.946107 systemd-networkd[1372]: cali01c461dd15a: Gained carrier Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.819 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0 coredns-674b8bbfcf- kube-system cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a 981 0 2026-01-24 00:56:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2tdmj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01c461dd15a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.820 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.891 [INFO][4502] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" HandleID="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.892 [INFO][4502] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" HandleID="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" 
Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2tdmj", "timestamp":"2026-01-24 00:57:08.891978963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.892 [INFO][4502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.892 [INFO][4502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.892 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.902 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.912 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.919 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.922 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.924 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.924 [INFO][4502] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.926 [INFO][4502] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9 Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.931 [INFO][4502] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.939 [INFO][4502] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.939 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" host="localhost" Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.939 [INFO][4502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:08.965347 containerd[1450]: 2026-01-24 00:57:08.939 [INFO][4502] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" HandleID="k8s-pod-network.0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.966337 containerd[1450]: 2026-01-24 00:57:08.941 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2tdmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01c461dd15a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:08.966337 containerd[1450]: 2026-01-24 00:57:08.941 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.966337 containerd[1450]: 2026-01-24 00:57:08.941 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01c461dd15a ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.966337 containerd[1450]: 2026-01-24 00:57:08.946 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.966337 
containerd[1450]: 2026-01-24 00:57:08.947 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9", Pod:"coredns-674b8bbfcf-2tdmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01c461dd15a", MAC:"ae:55:ff:3d:d8:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:08.966337 containerd[1450]: 2026-01-24 00:57:08.960 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2tdmj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:08.994034 containerd[1450]: time="2026-01-24T00:57:08.993293185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:08.994034 containerd[1450]: time="2026-01-24T00:57:08.993545636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:08.994034 containerd[1450]: time="2026-01-24T00:57:08.993617290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:08.994034 containerd[1450]: time="2026-01-24T00:57:08.993910816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.017621 systemd[1]: Started cri-containerd-0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9.scope - libcontainer container 0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9. Jan 24 00:57:09.033347 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:09.060625 systemd-networkd[1372]: cali143bfb665be: Link UP Jan 24 00:57:09.064574 systemd-networkd[1372]: cali143bfb665be: Gained carrier Jan 24 00:57:09.076079 containerd[1450]: time="2026-01-24T00:57:09.075909259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2tdmj,Uid:cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a,Namespace:kube-system,Attempt:1,} returns sandbox id \"0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9\"" Jan 24 00:57:09.077173 kubelet[2488]: E0124 00:57:09.077151 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.842 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t469r-eth0 csi-node-driver- calico-system 34330cde-9cb8-45f6-8598-34068565d43c 983 0 2026-01-24 00:56:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t469r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali143bfb665be [] [] }} ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.842 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.917 [INFO][4509] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" HandleID="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.919 [INFO][4509] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" HandleID="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Workload="localhost-k8s-csi--node--driver--t469r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t469r", "timestamp":"2026-01-24 00:57:08.917176684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.919 [INFO][4509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.939 [INFO][4509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:08.939 [INFO][4509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.003 [INFO][4509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.014 [INFO][4509] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.021 [INFO][4509] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.025 [INFO][4509] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.029 [INFO][4509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.029 [INFO][4509] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.030 [INFO][4509] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.037 [INFO][4509] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.048 [INFO][4509] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.048 [INFO][4509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" host="localhost" Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.048 [INFO][4509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:09.084565 containerd[1450]: 2026-01-24 00:57:09.048 [INFO][4509] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" HandleID="k8s-pod-network.61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.087097 containerd[1450]: 2026-01-24 00:57:09.052 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t469r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34330cde-9cb8-45f6-8598-34068565d43c", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t469r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143bfb665be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.087097 containerd[1450]: 2026-01-24 00:57:09.052 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.087097 containerd[1450]: 2026-01-24 00:57:09.052 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali143bfb665be ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.087097 containerd[1450]: 2026-01-24 00:57:09.066 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.087097 containerd[1450]: 2026-01-24 00:57:09.067 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t469r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34330cde-9cb8-45f6-8598-34068565d43c", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c", Pod:"csi-node-driver-t469r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143bfb665be", MAC:"e6:f9:62:ed:6b:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.087097 containerd[1450]: 2026-01-24 00:57:09.080 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c" Namespace="calico-system" Pod="csi-node-driver-t469r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:09.087566 containerd[1450]: time="2026-01-24T00:57:09.087300341Z" level=info msg="CreateContainer within sandbox \"0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:09.123581 containerd[1450]: time="2026-01-24T00:57:09.121769988Z" level=info msg="CreateContainer within sandbox \"0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c09edc30d9e8caf3de89efa23813ec6b9b20c2634f483da4767bc1b84b8ba7b\"" Jan 24 00:57:09.123581 containerd[1450]: time="2026-01-24T00:57:09.123172822Z" level=info msg="StartContainer for \"1c09edc30d9e8caf3de89efa23813ec6b9b20c2634f483da4767bc1b84b8ba7b\"" Jan 24 00:57:09.139833 containerd[1450]: time="2026-01-24T00:57:09.139528162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:09.139833 containerd[1450]: time="2026-01-24T00:57:09.139646623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:09.141303 containerd[1450]: time="2026-01-24T00:57:09.139689122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.141303 containerd[1450]: time="2026-01-24T00:57:09.139776264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.156014 systemd-networkd[1372]: calie8de8bdbef4: Gained IPv6LL Jan 24 00:57:09.163618 systemd[1]: Started cri-containerd-1c09edc30d9e8caf3de89efa23813ec6b9b20c2634f483da4767bc1b84b8ba7b.scope - libcontainer container 1c09edc30d9e8caf3de89efa23813ec6b9b20c2634f483da4767bc1b84b8ba7b. Jan 24 00:57:09.169462 systemd[1]: Started cri-containerd-61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c.scope - libcontainer container 61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c. Jan 24 00:57:09.176601 systemd-networkd[1372]: calie0f482d7467: Link UP Jan 24 00:57:09.176887 systemd-networkd[1372]: calie0f482d7467: Gained carrier Jan 24 00:57:09.195672 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:08.874 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--fw5bw-eth0 goldmane-666569f655- calico-system 203aa399-08cf-4bd0-a44a-0a01debc5662 982 0 2026-01-24 00:56:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-fw5bw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie0f482d7467 [] [] }} ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:08.874 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:08.927 [INFO][4520] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" HandleID="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:08.927 [INFO][4520] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" HandleID="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-fw5bw", "timestamp":"2026-01-24 00:57:08.927394401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:08.928 [INFO][4520] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.048 [INFO][4520] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.049 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.104 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.116 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.134 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.136 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.140 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.140 [INFO][4520] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.143 [INFO][4520] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478 Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.152 [INFO][4520] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.162 [INFO][4520] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.162 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" host="localhost" Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.162 [INFO][4520] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:09.197063 containerd[1450]: 2026-01-24 00:57:09.162 [INFO][4520] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" HandleID="k8s-pod-network.1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.197641 containerd[1450]: 2026-01-24 00:57:09.167 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fw5bw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"203aa399-08cf-4bd0-a44a-0a01debc5662", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-fw5bw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0f482d7467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.197641 containerd[1450]: 2026-01-24 00:57:09.170 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.197641 containerd[1450]: 2026-01-24 00:57:09.171 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0f482d7467 ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.197641 containerd[1450]: 2026-01-24 00:57:09.176 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.197641 containerd[1450]: 2026-01-24 00:57:09.177 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fw5bw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"203aa399-08cf-4bd0-a44a-0a01debc5662", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478", Pod:"goldmane-666569f655-fw5bw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0f482d7467", MAC:"8a:9c:17:46:58:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.197641 containerd[1450]: 2026-01-24 00:57:09.193 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478" Namespace="calico-system" Pod="goldmane-666569f655-fw5bw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:09.215654 containerd[1450]: time="2026-01-24T00:57:09.215554256Z" level=info msg="StartContainer for \"1c09edc30d9e8caf3de89efa23813ec6b9b20c2634f483da4767bc1b84b8ba7b\" returns successfully" Jan 24 00:57:09.228370 containerd[1450]: time="2026-01-24T00:57:09.228292215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t469r,Uid:34330cde-9cb8-45f6-8598-34068565d43c,Namespace:calico-system,Attempt:1,} returns sandbox id \"61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c\"" Jan 24 00:57:09.230966 containerd[1450]: time="2026-01-24T00:57:09.230944191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:09.244525 containerd[1450]: time="2026-01-24T00:57:09.244286332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:09.246310 containerd[1450]: time="2026-01-24T00:57:09.246189446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:09.246310 containerd[1450]: time="2026-01-24T00:57:09.246218390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.246310 containerd[1450]: time="2026-01-24T00:57:09.246302036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.264413 systemd-networkd[1372]: cali9ed4fe41ca2: Link UP Jan 24 00:57:09.267591 systemd-networkd[1372]: cali9ed4fe41ca2: Gained carrier Jan 24 00:57:09.276959 systemd[1]: Started cri-containerd-1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478.scope - libcontainer container 1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478. Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:08.876 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0 calico-apiserver-7599cd6db5- calico-apiserver daf15e6c-e319-4b6a-b81a-cb796e8f2eb5 984 0 2026-01-24 00:56:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7599cd6db5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7599cd6db5-l6tp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ed4fe41ca2 [] [] }} ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:08.876 [INFO][4472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:08.935 [INFO][4523] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" HandleID="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:08.935 [INFO][4523] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" HandleID="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7599cd6db5-l6tp2", "timestamp":"2026-01-24 00:57:08.935391916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:08.935 [INFO][4523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.162 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.163 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.204 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.215 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.225 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.228 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.233 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.233 [INFO][4523] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.237 [INFO][4523] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.247 [INFO][4523] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.257 [INFO][4523] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.257 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" host="localhost" Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.257 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:09.282208 containerd[1450]: 2026-01-24 00:57:09.257 [INFO][4523] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" HandleID="k8s-pod-network.3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.283421 containerd[1450]: 2026-01-24 00:57:09.261 [INFO][4472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0", GenerateName:"calico-apiserver-7599cd6db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7599cd6db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7599cd6db5-l6tp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ed4fe41ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.283421 containerd[1450]: 2026-01-24 00:57:09.262 [INFO][4472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.283421 containerd[1450]: 2026-01-24 00:57:09.262 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ed4fe41ca2 ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.283421 containerd[1450]: 2026-01-24 00:57:09.266 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.283421 containerd[1450]: 2026-01-24 00:57:09.266 [INFO][4472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0", GenerateName:"calico-apiserver-7599cd6db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7599cd6db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db", Pod:"calico-apiserver-7599cd6db5-l6tp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ed4fe41ca2", MAC:"1e:eb:bc:68:e4:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.283421 containerd[1450]: 2026-01-24 00:57:09.279 [INFO][4472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db" Namespace="calico-apiserver" Pod="calico-apiserver-7599cd6db5-l6tp2" WorkloadEndpoint="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:09.291561 containerd[1450]: time="2026-01-24T00:57:09.291501902Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:09.292708 containerd[1450]: time="2026-01-24T00:57:09.292630072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:09.293365 containerd[1450]: time="2026-01-24T00:57:09.292655171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:57:09.293392 kubelet[2488]: E0124 00:57:09.292898 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:09.293392 kubelet[2488]: E0124 00:57:09.292942 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:09.293392 kubelet[2488]: E0124 00:57:09.293052 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chmgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t469r_calico-system(34330cde-9cb8-45f6-8598-34068565d43c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:09.296615 containerd[1450]: time="2026-01-24T00:57:09.296084378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:09.312692 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:09.316112 containerd[1450]: time="2026-01-24T00:57:09.315777826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:09.316112 containerd[1450]: time="2026-01-24T00:57:09.315826257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:09.316112 containerd[1450]: time="2026-01-24T00:57:09.315878725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.316112 containerd[1450]: time="2026-01-24T00:57:09.315957831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.342726 systemd[1]: Started cri-containerd-3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db.scope - libcontainer container 3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db. Jan 24 00:57:09.361859 containerd[1450]: time="2026-01-24T00:57:09.361759505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fw5bw,Uid:203aa399-08cf-4bd0-a44a-0a01debc5662,Namespace:calico-system,Attempt:1,} returns sandbox id \"1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478\"" Jan 24 00:57:09.365958 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:09.369501 containerd[1450]: time="2026-01-24T00:57:09.368825219Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:09.370571 containerd[1450]: time="2026-01-24T00:57:09.370506432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:09.370758 containerd[1450]: time="2026-01-24T00:57:09.370684062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:57:09.371345 kubelet[2488]: E0124 00:57:09.371232 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:09.371403 kubelet[2488]: E0124 00:57:09.371347 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:09.371664 kubelet[2488]: E0124 00:57:09.371572 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chmgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t469r_calico-system(34330cde-9cb8-45f6-8598-34068565d43c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:09.372267 containerd[1450]: time="2026-01-24T00:57:09.372218561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:09.373227 kubelet[2488]: E0124 00:57:09.372951 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:57:09.410938 containerd[1450]: time="2026-01-24T00:57:09.410808871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7599cd6db5-l6tp2,Uid:daf15e6c-e319-4b6a-b81a-cb796e8f2eb5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db\"" Jan 24 00:57:09.444259 containerd[1450]: 
time="2026-01-24T00:57:09.444217838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:09.446287 containerd[1450]: time="2026-01-24T00:57:09.446043032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:09.446654 containerd[1450]: time="2026-01-24T00:57:09.446079590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:09.446808 kubelet[2488]: E0124 00:57:09.446740 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:09.446808 kubelet[2488]: E0124 00:57:09.446792 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:09.447065 kubelet[2488]: E0124 00:57:09.447006 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8s6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw5bw_calico-system(203aa399-08cf-4bd0-a44a-0a01debc5662): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:09.447493 containerd[1450]: time="2026-01-24T00:57:09.447409554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:09.448174 kubelet[2488]: E0124 00:57:09.448135 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:57:09.507296 containerd[1450]: time="2026-01-24T00:57:09.507215023Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:09.508693 containerd[1450]: time="2026-01-24T00:57:09.508619158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:09.508769 containerd[1450]: time="2026-01-24T00:57:09.508726978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:09.509263 kubelet[2488]: E0124 00:57:09.509180 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:09.509263 kubelet[2488]: E0124 00:57:09.509259 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:09.509525 kubelet[2488]: E0124 00:57:09.509388 2488 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7599cd6db5-l6tp2_calico-apiserver(daf15e6c-e319-4b6a-b81a-cb796e8f2eb5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:09.510818 kubelet[2488]: E0124 00:57:09.510725 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:57:09.585748 containerd[1450]: time="2026-01-24T00:57:09.585558328Z" level=info msg="StopPodSandbox for \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\"" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.643 [INFO][4797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.643 [INFO][4797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" iface="eth0" netns="/var/run/netns/cni-73849cb4-6514-d7d3-58a3-d756ab313a8c" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.644 [INFO][4797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" iface="eth0" netns="/var/run/netns/cni-73849cb4-6514-d7d3-58a3-d756ab313a8c" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.644 [INFO][4797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" iface="eth0" netns="/var/run/netns/cni-73849cb4-6514-d7d3-58a3-d756ab313a8c" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.644 [INFO][4797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.644 [INFO][4797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.676 [INFO][4805] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.679 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.680 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.694 [WARNING][4805] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.694 [INFO][4805] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.698 [INFO][4805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:09.707593 containerd[1450]: 2026-01-24 00:57:09.701 [INFO][4797] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:09.709078 containerd[1450]: time="2026-01-24T00:57:09.708281375Z" level=info msg="TearDown network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" successfully" Jan 24 00:57:09.709078 containerd[1450]: time="2026-01-24T00:57:09.708317273Z" level=info msg="StopPodSandbox for \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" returns successfully" Jan 24 00:57:09.711154 containerd[1450]: time="2026-01-24T00:57:09.709810003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b646d8bfb-nxb7c,Uid:0364261e-0b7f-4a7d-aec9-83adc08c04f8,Namespace:calico-system,Attempt:1,}" Jan 24 00:57:09.742248 systemd[1]: run-netns-cni\x2d73849cb4\x2d6514\x2dd7d3\x2d58a3\x2dd756ab313a8c.mount: Deactivated successfully. Jan 24 00:57:09.855173 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:56064.service - OpenSSH per-connection server daemon (10.0.0.1:56064). Jan 24 00:57:09.866175 kubelet[2488]: E0124 00:57:09.866122 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:57:09.869336 kubelet[2488]: E0124 00:57:09.869266 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:09.879666 systemd-networkd[1372]: cali0ae9edc7dc5: Link UP Jan 24 00:57:09.882718 kubelet[2488]: E0124 00:57:09.882543 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:57:09.883232 systemd-networkd[1372]: cali0ae9edc7dc5: Gained carrier Jan 24 00:57:09.893401 kubelet[2488]: E0124 00:57:09.893368 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:57:09.894001 kubelet[2488]: E0124 00:57:09.893671 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.775 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0 calico-kube-controllers-b646d8bfb- calico-system 0364261e-0b7f-4a7d-aec9-83adc08c04f8 1030 0 2026-01-24 00:56:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b646d8bfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b646d8bfb-nxb7c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0ae9edc7dc5 [] [] }} ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.775 [INFO][4818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.810 [INFO][4831] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" HandleID="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.810 [INFO][4831] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" HandleID="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139d90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b646d8bfb-nxb7c", "timestamp":"2026-01-24 00:57:09.81022406 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 
00:57:09.810 [INFO][4831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.810 [INFO][4831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.810 [INFO][4831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.819 [INFO][4831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.829 [INFO][4831] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.835 [INFO][4831] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.838 [INFO][4831] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.841 [INFO][4831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.841 [INFO][4831] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.844 [INFO][4831] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352 Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.850 [INFO][4831] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.865 [INFO][4831] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.865 [INFO][4831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" host="localhost" Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.868 [INFO][4831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:09.922734 containerd[1450]: 2026-01-24 00:57:09.868 [INFO][4831] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" HandleID="k8s-pod-network.8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.924522 containerd[1450]: 2026-01-24 00:57:09.874 [INFO][4818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0", GenerateName:"calico-kube-controllers-b646d8bfb-", Namespace:"calico-system", SelfLink:"", UID:"0364261e-0b7f-4a7d-aec9-83adc08c04f8", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b646d8bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b646d8bfb-nxb7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ae9edc7dc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.924522 containerd[1450]: 2026-01-24 00:57:09.874 [INFO][4818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.924522 containerd[1450]: 2026-01-24 00:57:09.874 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ae9edc7dc5 ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.924522 containerd[1450]: 2026-01-24 00:57:09.882 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.924522 containerd[1450]: 2026-01-24 00:57:09.884 [INFO][4818] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0", GenerateName:"calico-kube-controllers-b646d8bfb-", Namespace:"calico-system", SelfLink:"", UID:"0364261e-0b7f-4a7d-aec9-83adc08c04f8", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b646d8bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352", Pod:"calico-kube-controllers-b646d8bfb-nxb7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ae9edc7dc5", MAC:"4e:e0:e1:36:36:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:09.924522 containerd[1450]: 2026-01-24 00:57:09.906 [INFO][4818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352" Namespace="calico-system" Pod="calico-kube-controllers-b646d8bfb-nxb7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:09.930314 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 56064 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:09.934816 sshd[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:09.945700 systemd-logind[1429]: New session 8 of user core. Jan 24 00:57:09.948633 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:57:09.957574 kubelet[2488]: I0124 00:57:09.957385 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2tdmj" podStartSLOduration=33.957364182 podStartE2EDuration="33.957364182s" podCreationTimestamp="2026-01-24 00:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:09.91588805 +0000 UTC m=+38.426583442" watchObservedRunningTime="2026-01-24 00:57:09.957364182 +0000 UTC m=+38.468059573" Jan 24 00:57:09.990310 containerd[1450]: time="2026-01-24T00:57:09.990050451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:09.990310 containerd[1450]: time="2026-01-24T00:57:09.990107707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:09.991405 containerd[1450]: time="2026-01-24T00:57:09.990558197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:09.991405 containerd[1450]: time="2026-01-24T00:57:09.990716703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:10.049635 systemd[1]: Started cri-containerd-8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352.scope - libcontainer container 8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352. Jan 24 00:57:10.064156 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:10.093656 containerd[1450]: time="2026-01-24T00:57:10.092782886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b646d8bfb-nxb7c,Uid:0364261e-0b7f-4a7d-aec9-83adc08c04f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352\"" Jan 24 00:57:10.095154 containerd[1450]: time="2026-01-24T00:57:10.095103960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:10.123342 sshd[4840]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:10.127017 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:56064.service: Deactivated successfully. Jan 24 00:57:10.128992 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:57:10.131013 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:57:10.132707 systemd-logind[1429]: Removed session 8. 
Jan 24 00:57:10.158303 containerd[1450]: time="2026-01-24T00:57:10.158240448Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:10.160281 containerd[1450]: time="2026-01-24T00:57:10.160166036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:10.160281 containerd[1450]: time="2026-01-24T00:57:10.160262187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:10.165653 kubelet[2488]: E0124 00:57:10.165577 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:10.166102 kubelet[2488]: E0124 00:57:10.165677 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:10.166102 kubelet[2488]: E0124 00:57:10.165794 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cprhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b646d8bfb-nxb7c_calico-system(0364261e-0b7f-4a7d-aec9-83adc08c04f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:10.167152 kubelet[2488]: E0124 00:57:10.167104 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:57:10.434670 systemd-networkd[1372]: cali01c461dd15a: Gained IPv6LL Jan 24 00:57:10.498770 systemd-networkd[1372]: cali143bfb665be: Gained IPv6LL Jan 24 00:57:10.586155 containerd[1450]: time="2026-01-24T00:57:10.586031323Z" level=info msg="StopPodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\"" Jan 24 00:57:10.586155 containerd[1450]: time="2026-01-24T00:57:10.586069388Z" level=info msg="StopPodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\"" Jan 24 00:57:10.626690 systemd-networkd[1372]: cali9ed4fe41ca2: Gained IPv6LL Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.646 [INFO][4926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4926] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" iface="eth0" netns="/var/run/netns/cni-18b0c22e-b0e2-2268-ede5-0cbd89eee421" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4926] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" iface="eth0" netns="/var/run/netns/cni-18b0c22e-b0e2-2268-ede5-0cbd89eee421" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.648 [INFO][4926] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" iface="eth0" netns="/var/run/netns/cni-18b0c22e-b0e2-2268-ede5-0cbd89eee421" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.648 [INFO][4926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.648 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.676 [INFO][4948] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.676 [INFO][4948] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.676 [INFO][4948] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.683 [WARNING][4948] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.683 [INFO][4948] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.685 [INFO][4948] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:10.693917 containerd[1450]: 2026-01-24 00:57:10.691 [INFO][4926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:10.695622 containerd[1450]: time="2026-01-24T00:57:10.695584781Z" level=info msg="TearDown network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" successfully" Jan 24 00:57:10.695622 containerd[1450]: time="2026-01-24T00:57:10.695620047Z" level=info msg="StopPodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" returns successfully" Jan 24 00:57:10.696314 containerd[1450]: time="2026-01-24T00:57:10.696285046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-kh5pp,Uid:7fe9f3ea-2686-424b-8279-86ca8e141669,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:57:10.698379 systemd[1]: run-netns-cni\x2d18b0c22e\x2db0e2\x2d2268\x2dede5\x2d0cbd89eee421.mount: Deactivated successfully. Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" iface="eth0" netns="/var/run/netns/cni-165c395f-39fd-3bc2-2f06-10322acabf6e" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" iface="eth0" netns="/var/run/netns/cni-165c395f-39fd-3bc2-2f06-10322acabf6e" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" iface="eth0" netns="/var/run/netns/cni-165c395f-39fd-3bc2-2f06-10322acabf6e" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.647 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.681 [INFO][4946] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.681 [INFO][4946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.685 [INFO][4946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.698 [WARNING][4946] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.698 [INFO][4946] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.699 [INFO][4946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:10.705686 containerd[1450]: 2026-01-24 00:57:10.703 [INFO][4936] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:10.706237 containerd[1450]: time="2026-01-24T00:57:10.705838639Z" level=info msg="TearDown network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" successfully" Jan 24 00:57:10.706237 containerd[1450]: time="2026-01-24T00:57:10.705882141Z" level=info msg="StopPodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" returns successfully" Jan 24 00:57:10.706310 kubelet[2488]: E0124 00:57:10.706220 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:10.707150 containerd[1450]: time="2026-01-24T00:57:10.706703185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqktf,Uid:578c81f9-e877-4bb0-855e-4f7e7d4c1973,Namespace:kube-system,Attempt:1,}" Jan 24 00:57:10.709836 systemd[1]: run-netns-cni\x2d165c395f\x2d39fd\x2d3bc2\x2d2f06\x2d10322acabf6e.mount: Deactivated successfully. Jan 24 00:57:10.838887 systemd-networkd[1372]: cali49ad0184e13: Link UP Jan 24 00:57:10.841715 systemd-networkd[1372]: cali49ad0184e13: Gained carrier Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.764 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xqktf-eth0 coredns-674b8bbfcf- kube-system 578c81f9-e877-4bb0-855e-4f7e7d4c1973 1090 0 2026-01-24 00:56:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xqktf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49ad0184e13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.764 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.791 [INFO][4988] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" HandleID="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.792 [INFO][4988] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" HandleID="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xqktf", "timestamp":"2026-01-24 00:57:10.791945854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.792 [INFO][4988] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.792 [INFO][4988] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.792 [INFO][4988] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.800 [INFO][4988] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.806 [INFO][4988] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.812 [INFO][4988] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.815 [INFO][4988] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.818 [INFO][4988] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.818 [INFO][4988] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.820 [INFO][4988] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1 Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.824 [INFO][4988] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.831 [INFO][4988] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.831 [INFO][4988] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" host="localhost" Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.831 [INFO][4988] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:10.866166 containerd[1450]: 2026-01-24 00:57:10.831 [INFO][4988] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" HandleID="k8s-pod-network.dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.867055 containerd[1450]: 2026-01-24 00:57:10.834 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xqktf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"578c81f9-e877-4bb0-855e-4f7e7d4c1973", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xqktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49ad0184e13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:10.867055 containerd[1450]: 2026-01-24 00:57:10.834 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.867055 containerd[1450]: 2026-01-24 00:57:10.834 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49ad0184e13 ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.867055 containerd[1450]: 2026-01-24 00:57:10.845 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.867055 
containerd[1450]: 2026-01-24 00:57:10.848 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xqktf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"578c81f9-e877-4bb0-855e-4f7e7d4c1973", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1", Pod:"coredns-674b8bbfcf-xqktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49ad0184e13", MAC:"7a:f6:48:7b:aa:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:10.867055 containerd[1450]: 2026-01-24 00:57:10.862 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xqktf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:10.890205 containerd[1450]: time="2026-01-24T00:57:10.889660813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:10.890205 containerd[1450]: time="2026-01-24T00:57:10.889707761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:10.890205 containerd[1450]: time="2026-01-24T00:57:10.889820502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:10.890205 containerd[1450]: time="2026-01-24T00:57:10.890103658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:10.897345 kubelet[2488]: E0124 00:57:10.897312 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:10.898654 kubelet[2488]: E0124 00:57:10.898615 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:57:10.899078 kubelet[2488]: E0124 00:57:10.899029 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:57:10.900560 kubelet[2488]: E0124 00:57:10.900501 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:57:10.902350 kubelet[2488]: E0124 00:57:10.902305 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:57:10.923400 systemd[1]: run-containerd-runc-k8s.io-dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1-runc.wV9Nok.mount: Deactivated successfully. 
Jan 24 00:57:10.932618 systemd[1]: Started cri-containerd-dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1.scope - libcontainer container dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1. Jan 24 00:57:10.964580 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:10.979008 systemd-networkd[1372]: cali4c68bc67bbb: Link UP Jan 24 00:57:10.980157 systemd-networkd[1372]: cali4c68bc67bbb: Gained carrier Jan 24 00:57:11.010651 systemd-networkd[1372]: calie0f482d7467: Gained IPv6LL Jan 24 00:57:11.011780 containerd[1450]: time="2026-01-24T00:57:11.011421695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xqktf,Uid:578c81f9-e877-4bb0-855e-4f7e7d4c1973,Namespace:kube-system,Attempt:1,} returns sandbox id \"dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1\"" Jan 24 00:57:11.013339 kubelet[2488]: E0124 00:57:11.013284 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:11.017960 containerd[1450]: time="2026-01-24T00:57:11.017932194Z" level=info msg="CreateContainer within sandbox \"dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.768 [INFO][4970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0 calico-apiserver-5df9b4f89d- calico-apiserver 7fe9f3ea-2686-424b-8279-86ca8e141669 1089 0 2026-01-24 00:56:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df9b4f89d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5df9b4f89d-kh5pp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c68bc67bbb [] [] }} ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.768 [INFO][4970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.796 [INFO][4995] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" HandleID="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.796 [INFO][4995] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" HandleID="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc00024f650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5df9b4f89d-kh5pp", "timestamp":"2026-01-24 00:57:10.795995463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.796 [INFO][4995] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.831 [INFO][4995] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.831 [INFO][4995] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.904 [INFO][4995] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.917 [INFO][4995] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.929 [INFO][4995] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.939 [INFO][4995] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.945 [INFO][4995] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.945 [INFO][4995] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.948 [INFO][4995] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.955 [INFO][4995] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.968 [INFO][4995] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.969 [INFO][4995] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" host="localhost" Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.969 [INFO][4995] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:57:11.025844 containerd[1450]: 2026-01-24 00:57:10.969 [INFO][4995] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" HandleID="k8s-pod-network.0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.026714 containerd[1450]: 2026-01-24 00:57:10.974 [INFO][4970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fe9f3ea-2686-424b-8279-86ca8e141669", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5df9b4f89d-kh5pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c68bc67bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:11.026714 containerd[1450]: 2026-01-24 00:57:10.974 [INFO][4970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.026714 containerd[1450]: 2026-01-24 00:57:10.974 [INFO][4970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c68bc67bbb ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.026714 containerd[1450]: 2026-01-24 00:57:10.979 [INFO][4970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.026714 containerd[1450]: 2026-01-24 00:57:10.982 [INFO][4970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fe9f3ea-2686-424b-8279-86ca8e141669", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a", Pod:"calico-apiserver-5df9b4f89d-kh5pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c68bc67bbb", MAC:"e2:bc:b5:0b:b7:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:11.026714 containerd[1450]: 2026-01-24 00:57:11.009 [INFO][4970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a" Namespace="calico-apiserver" Pod="calico-apiserver-5df9b4f89d-kh5pp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:11.042875 containerd[1450]: time="2026-01-24T00:57:11.041078186Z" level=info msg="CreateContainer within sandbox \"dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3f220f5873167d2964b9ecffcb6fc99f4be286602dd8324c2176dd2e1f8d776\"" Jan 24 00:57:11.042875 containerd[1450]: time="2026-01-24T00:57:11.041918497Z" level=info msg="StartContainer for \"a3f220f5873167d2964b9ecffcb6fc99f4be286602dd8324c2176dd2e1f8d776\"" Jan 24 00:57:11.059680 containerd[1450]: time="2026-01-24T00:57:11.059532432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:11.059680 containerd[1450]: time="2026-01-24T00:57:11.059589999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:11.059680 containerd[1450]: time="2026-01-24T00:57:11.059600499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:11.059823 containerd[1450]: time="2026-01-24T00:57:11.059674075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:11.079607 systemd[1]: Started cri-containerd-a3f220f5873167d2964b9ecffcb6fc99f4be286602dd8324c2176dd2e1f8d776.scope - libcontainer container a3f220f5873167d2964b9ecffcb6fc99f4be286602dd8324c2176dd2e1f8d776. Jan 24 00:57:11.082695 systemd[1]: Started cri-containerd-0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a.scope - libcontainer container 0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a. Jan 24 00:57:11.099702 systemd-resolved[1374]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:57:11.113060 containerd[1450]: time="2026-01-24T00:57:11.112950335Z" level=info msg="StartContainer for \"a3f220f5873167d2964b9ecffcb6fc99f4be286602dd8324c2176dd2e1f8d776\" returns successfully" Jan 24 00:57:11.128494 containerd[1450]: time="2026-01-24T00:57:11.128391618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df9b4f89d-kh5pp,Uid:7fe9f3ea-2686-424b-8279-86ca8e141669,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a\"" Jan 24 00:57:11.130499 containerd[1450]: time="2026-01-24T00:57:11.130204235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:11.190930 containerd[1450]: time="2026-01-24T00:57:11.190827347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:11.192152 containerd[1450]: time="2026-01-24T00:57:11.192097184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:11.192297 containerd[1450]: time="2026-01-24T00:57:11.192216233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:11.192488 kubelet[2488]: E0124 00:57:11.192390 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:11.193039 kubelet[2488]: E0124 00:57:11.192511 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:11.193039 kubelet[2488]: E0124 00:57:11.192644 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmz9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df9b4f89d-kh5pp_calico-apiserver(7fe9f3ea-2686-424b-8279-86ca8e141669): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:11.194761 kubelet[2488]: E0124 00:57:11.194701 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:57:11.714683 systemd-networkd[1372]: cali0ae9edc7dc5: Gained IPv6LL Jan 24 00:57:11.899694 kubelet[2488]: E0124 00:57:11.899654 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:11.901724 kubelet[2488]: E0124 00:57:11.900983 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:11.901724 kubelet[2488]: E0124 00:57:11.901268 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:57:11.901724 kubelet[2488]: E0124 00:57:11.901390 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:57:11.916483 kubelet[2488]: I0124 00:57:11.915304 2488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xqktf" podStartSLOduration=35.915289135 podStartE2EDuration="35.915289135s" podCreationTimestamp="2026-01-24 00:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:57:11.914750873 +0000 UTC m=+40.425446275" watchObservedRunningTime="2026-01-24 00:57:11.915289135 +0000 UTC m=+40.425984527" Jan 24 00:57:12.162785 systemd-networkd[1372]: cali49ad0184e13: Gained IPv6LL Jan 24 00:57:12.904715 kubelet[2488]: E0124 00:57:12.904594 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:57:12.905404 kubelet[2488]: E0124 00:57:12.904852 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:12.931732 systemd-networkd[1372]: cali4c68bc67bbb: Gained IPv6LL Jan 24 00:57:13.904714 kubelet[2488]: E0124 00:57:13.904611 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:14.587480 containerd[1450]: time="2026-01-24T00:57:14.586663580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:14.647138 containerd[1450]: time="2026-01-24T00:57:14.647083442Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:14.648472 containerd[1450]: time="2026-01-24T00:57:14.648395577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:14.648545 containerd[1450]: time="2026-01-24T00:57:14.648511464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:57:14.648799 kubelet[2488]: E0124 00:57:14.648725 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:14.648799 kubelet[2488]: E0124 00:57:14.648782 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:14.649189 kubelet[2488]: E0124 00:57:14.648934 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:94fe4a6d3bbc42d590b86714a22fd0ec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d4995d8c5-4f2ww_calico-system(05590686-f70c-407a-ace8-b12a72f3a4b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:14.650944 containerd[1450]: time="2026-01-24T00:57:14.650889526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:14.707298 containerd[1450]: time="2026-01-24T00:57:14.707178589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:14.708524 containerd[1450]: time="2026-01-24T00:57:14.708406758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:14.708652 containerd[1450]: time="2026-01-24T00:57:14.708520510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:14.708817 kubelet[2488]: E0124 00:57:14.708763 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:14.708895 kubelet[2488]: E0124 00:57:14.708822 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:14.709071 kubelet[2488]: E0124 00:57:14.708980 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d4995d8c5-4f2ww_calico-system(05590686-f70c-407a-ace8-b12a72f3a4b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:14.710303 kubelet[2488]: E0124 00:57:14.710250 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d4995d8c5-4f2ww" podUID="05590686-f70c-407a-ace8-b12a72f3a4b1" Jan 24 00:57:15.138655 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:53546.service - OpenSSH per-connection server daemon (10.0.0.1:53546). Jan 24 00:57:15.184221 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 53546 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:15.186061 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:15.190904 systemd-logind[1429]: New session 9 of user core. Jan 24 00:57:15.199613 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:57:15.350939 sshd[5157]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:15.357656 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:53546.service: Deactivated successfully. Jan 24 00:57:15.361641 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:57:15.364756 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:57:15.366106 systemd-logind[1429]: Removed session 9. Jan 24 00:57:20.372949 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:53552.service - OpenSSH per-connection server daemon (10.0.0.1:53552). Jan 24 00:57:20.419976 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 53552 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:20.421751 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:20.426834 systemd-logind[1429]: New session 10 of user core. Jan 24 00:57:20.438622 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:57:20.580316 sshd[5174]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:20.585779 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:53552.service: Deactivated successfully. Jan 24 00:57:20.588061 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:57:20.589147 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:57:20.590606 systemd-logind[1429]: Removed session 10. 
Jan 24 00:57:21.589519 containerd[1450]: time="2026-01-24T00:57:21.589372838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:21.672400 containerd[1450]: time="2026-01-24T00:57:21.671683001Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:21.675978 containerd[1450]: time="2026-01-24T00:57:21.675803895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:21.675978 containerd[1450]: time="2026-01-24T00:57:21.675864994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:21.676260 kubelet[2488]: E0124 00:57:21.676188 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:21.676260 kubelet[2488]: E0124 00:57:21.676235 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:21.676881 kubelet[2488]: E0124 00:57:21.676365 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2k96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df9b4f89d-2dhf7_calico-apiserver(e706493e-7f12-4ad3-8c2a-5a508961b9f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:21.677781 kubelet[2488]: E0124 00:57:21.677720 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:57:22.585844 containerd[1450]: time="2026-01-24T00:57:22.585808052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:22.655139 containerd[1450]: time="2026-01-24T00:57:22.655006958Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:22.656987 containerd[1450]: time="2026-01-24T00:57:22.656802031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:22.657067 containerd[1450]: time="2026-01-24T00:57:22.656971601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:22.657241 kubelet[2488]: E0124 00:57:22.657147 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:22.657241 kubelet[2488]: E0124 00:57:22.657220 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:22.657509 kubelet[2488]: E0124 
00:57:22.657392 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7599cd6db5-l6tp2_calico-apiserver(daf15e6c-e319-4b6a-b81a-cb796e8f2eb5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:22.659890 kubelet[2488]: E0124 00:57:22.659781 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:57:23.586418 containerd[1450]: time="2026-01-24T00:57:23.586341325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:23.654690 containerd[1450]: time="2026-01-24T00:57:23.654577006Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:23.656607 containerd[1450]: time="2026-01-24T00:57:23.656395950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:23.656607 containerd[1450]: time="2026-01-24T00:57:23.656533099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:23.657166 kubelet[2488]: E0124 00:57:23.656806 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:23.657166 kubelet[2488]: E0124 00:57:23.656857 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:23.657166 kubelet[2488]: E0124 00:57:23.657014 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8s6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw5bw_calico-system(203aa399-08cf-4bd0-a44a-0a01debc5662): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:23.658391 kubelet[2488]: E0124 00:57:23.658325 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:57:24.586251 containerd[1450]: time="2026-01-24T00:57:24.586182705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:24.679679 containerd[1450]: time="2026-01-24T00:57:24.679535791Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:24.681170 containerd[1450]: time="2026-01-24T00:57:24.681113273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:24.681258 containerd[1450]: time="2026-01-24T00:57:24.681220033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:57:24.681642 kubelet[2488]: E0124 00:57:24.681517 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:24.682112 kubelet[2488]: E0124 00:57:24.681644 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:24.682112 kubelet[2488]: E0124 00:57:24.682008 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chmgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t469r_calico-system(34330cde-9cb8-45f6-8598-34068565d43c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:24.682320 containerd[1450]: time="2026-01-24T00:57:24.682196994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:24.788235 containerd[1450]: time="2026-01-24T00:57:24.788141466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:24.790018 containerd[1450]: time="2026-01-24T00:57:24.789877423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:24.790138 containerd[1450]: time="2026-01-24T00:57:24.789926713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:24.790349 kubelet[2488]: E0124 00:57:24.790286 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:24.790349 kubelet[2488]: E0124 00:57:24.790356 2488 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:24.790778 kubelet[2488]: E0124 00:57:24.790688 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmz9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df9b4f89d-kh5pp_calico-apiserver(7fe9f3ea-2686-424b-8279-86ca8e141669): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:24.791500 containerd[1450]: time="2026-01-24T00:57:24.791383470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:24.793498 kubelet[2488]: E0124 00:57:24.793403 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" 
podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:57:24.860310 containerd[1450]: time="2026-01-24T00:57:24.860102645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:24.862097 containerd[1450]: time="2026-01-24T00:57:24.861907979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:24.862097 containerd[1450]: time="2026-01-24T00:57:24.862023207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:57:24.862391 kubelet[2488]: E0124 00:57:24.862261 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:24.862391 kubelet[2488]: E0124 00:57:24.862337 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:24.862702 kubelet[2488]: E0124 00:57:24.862599 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chmgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t469r_calico-system(34330cde-9cb8-45f6-8598-34068565d43c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:24.864068 kubelet[2488]: E0124 00:57:24.863970 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:57:25.587531 containerd[1450]: time="2026-01-24T00:57:25.587115123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:57:25.589018 kubelet[2488]: E0124 00:57:25.588859 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d4995d8c5-4f2ww" podUID="05590686-f70c-407a-ace8-b12a72f3a4b1" Jan 24 00:57:25.603203 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:41634.service - OpenSSH per-connection server daemon (10.0.0.1:41634). Jan 24 00:57:25.647239 containerd[1450]: time="2026-01-24T00:57:25.647154534Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:25.648828 containerd[1450]: time="2026-01-24T00:57:25.648715392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:57:25.648828 containerd[1450]: time="2026-01-24T00:57:25.648755565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:25.649133 kubelet[2488]: E0124 00:57:25.649052 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:25.649187 kubelet[2488]: E0124 00:57:25.649135 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:57:25.649420 kubelet[2488]: E0124 00:57:25.649304 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cprhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b646d8bfb-nxb7c_calico-system(0364261e-0b7f-4a7d-aec9-83adc08c04f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:25.651510 kubelet[2488]: E0124 00:57:25.650663 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:57:25.651601 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 41634 ssh2: RSA 
SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:25.654020 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:25.662788 systemd-logind[1429]: New session 11 of user core. Jan 24 00:57:25.668758 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:57:25.813407 sshd[5198]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:25.822640 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:41634.service: Deactivated successfully. Jan 24 00:57:25.824585 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:57:25.825657 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:57:25.835033 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:41646.service - OpenSSH per-connection server daemon (10.0.0.1:41646). Jan 24 00:57:25.836378 systemd-logind[1429]: Removed session 11. Jan 24 00:57:25.878514 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 41646 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:25.880574 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:25.886477 systemd-logind[1429]: New session 12 of user core. Jan 24 00:57:25.900787 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:57:26.089312 sshd[5213]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:26.104233 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:41646.service: Deactivated successfully. Jan 24 00:57:26.106742 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:57:26.109740 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:57:26.122984 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:41656.service - OpenSSH per-connection server daemon (10.0.0.1:41656). Jan 24 00:57:26.126258 systemd-logind[1429]: Removed session 12. Jan 24 00:57:26.158098 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 41656 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:26.160345 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:26.165935 systemd-logind[1429]: New session 13 of user core. Jan 24 00:57:26.176689 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:57:26.315186 sshd[5226]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:26.319903 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:41656.service: Deactivated successfully. Jan 24 00:57:26.322535 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:57:26.323518 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:57:26.325088 systemd-logind[1429]: Removed session 13. Jan 24 00:57:31.330964 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:41666.service - OpenSSH per-connection server daemon (10.0.0.1:41666). Jan 24 00:57:31.373040 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 41666 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:31.375201 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:31.380772 systemd-logind[1429]: New session 14 of user core. Jan 24 00:57:31.397712 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:57:31.536368 sshd[5247]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:31.548135 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:41666.service: Deactivated successfully. 
Jan 24 00:57:31.551214 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:57:31.554048 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:57:31.560040 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:41680.service - OpenSSH per-connection server daemon (10.0.0.1:41680). Jan 24 00:57:31.561482 systemd-logind[1429]: Removed session 14. Jan 24 00:57:31.567491 containerd[1450]: time="2026-01-24T00:57:31.567125977Z" level=info msg="StopPodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\"" Jan 24 00:57:31.611529 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 41680 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:31.614180 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:31.620617 systemd-logind[1429]: New session 15 of user core. Jan 24 00:57:31.628602 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.617 [WARNING][5273] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9", Pod:"coredns-674b8bbfcf-2tdmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01c461dd15a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.617 [INFO][5273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.617 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" iface="eth0" netns="" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.617 [INFO][5273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.617 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.643 [INFO][5286] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.644 [INFO][5286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.644 [INFO][5286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.649 [WARNING][5286] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.649 [INFO][5286] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.651 [INFO][5286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:31.656786 containerd[1450]: 2026-01-24 00:57:31.654 [INFO][5273] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.657353 containerd[1450]: time="2026-01-24T00:57:31.657306255Z" level=info msg="TearDown network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" successfully" Jan 24 00:57:31.657353 containerd[1450]: time="2026-01-24T00:57:31.657351078Z" level=info msg="StopPodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" returns successfully" Jan 24 00:57:31.658166 containerd[1450]: time="2026-01-24T00:57:31.658127284Z" level=info msg="RemovePodSandbox for \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\"" Jan 24 00:57:31.660075 containerd[1450]: time="2026-01-24T00:57:31.660044110Z" level=info msg="Forcibly stopping sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\"" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.701 [WARNING][5304] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cc1bbdfb-ba1e-48a0-8b73-32d52c484b6a", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0117c617c2ee42c174cedae86de2fb0734c2ef57460a9a1da0c20dda9718c8d9", Pod:"coredns-674b8bbfcf-2tdmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01c461dd15a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.702 [INFO][5304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.702 [INFO][5304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" iface="eth0" netns="" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.702 [INFO][5304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.702 [INFO][5304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.737 [INFO][5317] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.738 [INFO][5317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.738 [INFO][5317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.747 [WARNING][5317] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.747 [INFO][5317] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" HandleID="k8s-pod-network.08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Workload="localhost-k8s-coredns--674b8bbfcf--2tdmj-eth0" Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.750 [INFO][5317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:31.757498 containerd[1450]: 2026-01-24 00:57:31.753 [INFO][5304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e" Jan 24 00:57:31.758166 containerd[1450]: time="2026-01-24T00:57:31.757522765Z" level=info msg="TearDown network for sandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" successfully" Jan 24 00:57:31.769354 containerd[1450]: time="2026-01-24T00:57:31.769109862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:31.769354 containerd[1450]: time="2026-01-24T00:57:31.769192556Z" level=info msg="RemovePodSandbox \"08450064292c0a37aaa89f0f577a4bd37715bb0dd96c6b0517fd50be7a66291e\" returns successfully" Jan 24 00:57:31.770125 containerd[1450]: time="2026-01-24T00:57:31.770105204Z" level=info msg="StopPodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\"" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.828 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e706493e-7f12-4ad3-8c2a-5a508961b9f4", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd", Pod:"calico-apiserver-5df9b4f89d-2dhf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8de8bdbef4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.829 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.829 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" iface="eth0" netns="" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.829 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.829 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.857 [INFO][5345] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.857 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.858 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.866 [WARNING][5345] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.866 [INFO][5345] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.868 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:31.876874 containerd[1450]: 2026-01-24 00:57:31.872 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.876874 containerd[1450]: time="2026-01-24T00:57:31.875249398Z" level=info msg="TearDown network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" successfully" Jan 24 00:57:31.876874 containerd[1450]: time="2026-01-24T00:57:31.875281658Z" level=info msg="StopPodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" returns successfully" Jan 24 00:57:31.876874 containerd[1450]: time="2026-01-24T00:57:31.875881792Z" level=info msg="RemovePodSandbox for \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\"" Jan 24 00:57:31.876874 containerd[1450]: time="2026-01-24T00:57:31.875907951Z" level=info msg="Forcibly stopping sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\"" Jan 24 00:57:31.948675 sshd[5262]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:31.960515 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:41680.service: Deactivated successfully. Jan 24 00:57:31.965406 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:57:31.968398 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.922 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e706493e-7f12-4ad3-8c2a-5a508961b9f4", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e48b315befb7e73854d818939c375725567147a34f61c1a4b1924a07729c4dd", Pod:"calico-apiserver-5df9b4f89d-2dhf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8de8bdbef4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.922 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.922 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" iface="eth0" netns="" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.922 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.922 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.960 [INFO][5371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.961 [INFO][5371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.961 [INFO][5371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.969 [WARNING][5371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.969 [INFO][5371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" HandleID="k8s-pod-network.fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--2dhf7-eth0" Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.972 [INFO][5371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:31.979068 containerd[1450]: 2026-01-24 00:57:31.975 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9" Jan 24 00:57:31.979684 containerd[1450]: time="2026-01-24T00:57:31.979107590Z" level=info msg="TearDown network for sandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" successfully" Jan 24 00:57:31.980865 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:41694.service - OpenSSH per-connection server daemon (10.0.0.1:41694). Jan 24 00:57:31.982851 systemd-logind[1429]: Removed session 15. Jan 24 00:57:31.984757 containerd[1450]: time="2026-01-24T00:57:31.984696856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:31.984757 containerd[1450]: time="2026-01-24T00:57:31.984739626Z" level=info msg="RemovePodSandbox \"fdc97f7a4d6c94f23cc337279d3fe0ace62d86b0ec67287eea18b169123fccf9\" returns successfully" Jan 24 00:57:31.985372 containerd[1450]: time="2026-01-24T00:57:31.985309983Z" level=info msg="StopPodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\"" Jan 24 00:57:32.020932 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 41694 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:32.022869 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:32.028114 systemd-logind[1429]: New session 16 of user core. Jan 24 00:57:32.034677 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.027 [WARNING][5392] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xqktf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"578c81f9-e877-4bb0-855e-4f7e7d4c1973", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1", Pod:"coredns-674b8bbfcf-xqktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49ad0184e13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.027 [INFO][5392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.027 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" iface="eth0" netns="" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.027 [INFO][5392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.027 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.053 [INFO][5402] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.054 [INFO][5402] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.054 [INFO][5402] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.062 [WARNING][5402] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.062 [INFO][5402] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.064 [INFO][5402] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.069726 containerd[1450]: 2026-01-24 00:57:32.067 [INFO][5392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.069726 containerd[1450]: time="2026-01-24T00:57:32.069704168Z" level=info msg="TearDown network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" successfully" Jan 24 00:57:32.069726 containerd[1450]: time="2026-01-24T00:57:32.069728764Z" level=info msg="StopPodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" returns successfully" Jan 24 00:57:32.070513 containerd[1450]: time="2026-01-24T00:57:32.070484068Z" level=info msg="RemovePodSandbox for \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\"" Jan 24 00:57:32.070548 containerd[1450]: time="2026-01-24T00:57:32.070517120Z" level=info msg="Forcibly stopping sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\"" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.122 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xqktf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"578c81f9-e877-4bb0-855e-4f7e7d4c1973", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea1062e4033c7f39fb0185f5393b5db25da92c52ac0eb2f232ddd0ba3a54dd1", Pod:"coredns-674b8bbfcf-xqktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49ad0184e13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.122 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.122 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" iface="eth0" netns="" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.122 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.122 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.157 [INFO][5435] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.157 [INFO][5435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.157 [INFO][5435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.165 [WARNING][5435] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.165 [INFO][5435] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" HandleID="k8s-pod-network.384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Workload="localhost-k8s-coredns--674b8bbfcf--xqktf-eth0" Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.168 [INFO][5435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.177211 containerd[1450]: 2026-01-24 00:57:32.173 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b" Jan 24 00:57:32.177211 containerd[1450]: time="2026-01-24T00:57:32.177167743Z" level=info msg="TearDown network for sandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" successfully" Jan 24 00:57:32.181677 containerd[1450]: time="2026-01-24T00:57:32.181595062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:32.181677 containerd[1450]: time="2026-01-24T00:57:32.181673939Z" level=info msg="RemovePodSandbox \"384b9888e416f07a09552362bbf3ee7f51dcba81e6d4190c7510e1a60c7b459b\" returns successfully" Jan 24 00:57:32.182362 containerd[1450]: time="2026-01-24T00:57:32.182307070Z" level=info msg="StopPodSandbox for \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\"" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.228 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0", GenerateName:"calico-kube-controllers-b646d8bfb-", Namespace:"calico-system", SelfLink:"", UID:"0364261e-0b7f-4a7d-aec9-83adc08c04f8", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b646d8bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352", Pod:"calico-kube-controllers-b646d8bfb-nxb7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ae9edc7dc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.228 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.229 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" iface="eth0" netns="" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.229 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.229 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.261 [INFO][5461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.261 [INFO][5461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.261 [INFO][5461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.270 [WARNING][5461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.270 [INFO][5461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.273 [INFO][5461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.279494 containerd[1450]: 2026-01-24 00:57:32.276 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.279494 containerd[1450]: time="2026-01-24T00:57:32.279283293Z" level=info msg="TearDown network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" successfully" Jan 24 00:57:32.279494 containerd[1450]: time="2026-01-24T00:57:32.279318608Z" level=info msg="StopPodSandbox for \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" returns successfully" Jan 24 00:57:32.280244 containerd[1450]: time="2026-01-24T00:57:32.280042572Z" level=info msg="RemovePodSandbox for \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\"" Jan 24 00:57:32.280244 containerd[1450]: time="2026-01-24T00:57:32.280076025Z" level=info msg="Forcibly stopping sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\"" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.341 [WARNING][5482] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0", GenerateName:"calico-kube-controllers-b646d8bfb-", Namespace:"calico-system", SelfLink:"", UID:"0364261e-0b7f-4a7d-aec9-83adc08c04f8", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b646d8bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c74d85433992ec672565a1239b4b6c62378639cea66dd5754537db4dc007352", Pod:"calico-kube-controllers-b646d8bfb-nxb7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ae9edc7dc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.342 [INFO][5482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.342 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" iface="eth0" netns="" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.342 [INFO][5482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.342 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.374 [INFO][5493] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.374 [INFO][5493] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.375 [INFO][5493] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.386 [WARNING][5493] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.386 [INFO][5493] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" HandleID="k8s-pod-network.4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Workload="localhost-k8s-calico--kube--controllers--b646d8bfb--nxb7c-eth0" Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.390 [INFO][5493] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.398175 containerd[1450]: 2026-01-24 00:57:32.393 [INFO][5482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607" Jan 24 00:57:32.399976 containerd[1450]: time="2026-01-24T00:57:32.399092577Z" level=info msg="TearDown network for sandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" successfully" Jan 24 00:57:32.408817 containerd[1450]: time="2026-01-24T00:57:32.408700868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:32.408817 containerd[1450]: time="2026-01-24T00:57:32.408791586Z" level=info msg="RemovePodSandbox \"4642853225d2d1f8941e62878e263da9cf25c1945b2dcd6184a7eb57da4a1607\" returns successfully" Jan 24 00:57:32.409538 containerd[1450]: time="2026-01-24T00:57:32.409492187Z" level=info msg="StopPodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\"" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.476 [WARNING][5510] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t469r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34330cde-9cb8-45f6-8598-34068565d43c", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c", Pod:"csi-node-driver-t469r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143bfb665be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.479 [INFO][5510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.479 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" iface="eth0" netns="" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.479 [INFO][5510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.479 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.517 [INFO][5519] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.518 [INFO][5519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.519 [INFO][5519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.527 [WARNING][5519] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.527 [INFO][5519] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.530 [INFO][5519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.537887 containerd[1450]: 2026-01-24 00:57:32.533 [INFO][5510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.538417 containerd[1450]: time="2026-01-24T00:57:32.538113060Z" level=info msg="TearDown network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" successfully" Jan 24 00:57:32.538417 containerd[1450]: time="2026-01-24T00:57:32.538235038Z" level=info msg="StopPodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" returns successfully" Jan 24 00:57:32.541155 containerd[1450]: time="2026-01-24T00:57:32.540835539Z" level=info msg="RemovePodSandbox for \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\"" Jan 24 00:57:32.541155 containerd[1450]: time="2026-01-24T00:57:32.540866377Z" level=info msg="Forcibly stopping sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\"" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.610 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t469r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34330cde-9cb8-45f6-8598-34068565d43c", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61c17a370e03a4044cfb230a5491de6716f483877b1e8470d847984037df369c", Pod:"csi-node-driver-t469r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali143bfb665be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.613 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.613 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" iface="eth0" netns="" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.613 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.613 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.648 [INFO][5547] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.648 [INFO][5547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.648 [INFO][5547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.658 [WARNING][5547] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.658 [INFO][5547] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" HandleID="k8s-pod-network.f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Workload="localhost-k8s-csi--node--driver--t469r-eth0" Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.663 [INFO][5547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.671385 containerd[1450]: 2026-01-24 00:57:32.666 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b" Jan 24 00:57:32.673549 containerd[1450]: time="2026-01-24T00:57:32.672465699Z" level=info msg="TearDown network for sandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" successfully" Jan 24 00:57:32.679156 containerd[1450]: time="2026-01-24T00:57:32.678983980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:32.679156 containerd[1450]: time="2026-01-24T00:57:32.679149509Z" level=info msg="RemovePodSandbox \"f6f1ffe2eed85679b329e2077286107ef5af46db3ac2373d4baea2da8933f66b\" returns successfully" Jan 24 00:57:32.680980 containerd[1450]: time="2026-01-24T00:57:32.680894708Z" level=info msg="StopPodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\"" Jan 24 00:57:32.754739 sshd[5382]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:32.776790 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:41700.service - OpenSSH per-connection server daemon (10.0.0.1:41700). Jan 24 00:57:32.777615 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:41694.service: Deactivated successfully. Jan 24 00:57:32.783914 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:57:32.789725 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:57:32.801560 systemd-logind[1429]: Removed session 16. Jan 24 00:57:32.830717 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 41700 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:32.831864 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.734 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0", GenerateName:"calico-apiserver-7599cd6db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7599cd6db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db", Pod:"calico-apiserver-7599cd6db5-l6tp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ed4fe41ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.735 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.735 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" iface="eth0" netns="" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.735 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.735 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.806 [INFO][5574] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.807 [INFO][5574] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.807 [INFO][5574] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.821 [WARNING][5574] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.821 [INFO][5574] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.824 [INFO][5574] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.832595 containerd[1450]: 2026-01-24 00:57:32.828 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.833339 containerd[1450]: time="2026-01-24T00:57:32.833282018Z" level=info msg="TearDown network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" successfully" Jan 24 00:57:32.833339 containerd[1450]: time="2026-01-24T00:57:32.833321211Z" level=info msg="StopPodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" returns successfully" Jan 24 00:57:32.834740 containerd[1450]: time="2026-01-24T00:57:32.834693753Z" level=info msg="RemovePodSandbox for \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\"" Jan 24 00:57:32.834740 containerd[1450]: time="2026-01-24T00:57:32.834736863Z" level=info msg="Forcibly stopping sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\"" Jan 24 00:57:32.849220 systemd-logind[1429]: New session 17 of user core. Jan 24 00:57:32.851653 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:57:32.890823 systemd[1]: run-containerd-runc-k8s.io-ec22a60dff34a0639defc3af11bcd8ff257426e20efb9b54238de16f981fc5eb-runc.rMVbIC.mount: Deactivated successfully. Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.909 [WARNING][5604] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0", GenerateName:"calico-apiserver-7599cd6db5-", Namespace:"calico-apiserver", SelfLink:"", UID:"daf15e6c-e319-4b6a-b81a-cb796e8f2eb5", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7599cd6db5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3162dd2f4e02da89cd40be580c301b1c3b8efca8ff2e27f053032f980445a7db", Pod:"calico-apiserver-7599cd6db5-l6tp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ed4fe41ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.912 [INFO][5604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.912 [INFO][5604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" iface="eth0" netns="" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.912 [INFO][5604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.912 [INFO][5604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.957 [INFO][5629] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.957 [INFO][5629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.957 [INFO][5629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.969 [WARNING][5629] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.969 [INFO][5629] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" HandleID="k8s-pod-network.d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Workload="localhost-k8s-calico--apiserver--7599cd6db5--l6tp2-eth0" Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.972 [INFO][5629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:32.977882 containerd[1450]: 2026-01-24 00:57:32.975 [INFO][5604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029" Jan 24 00:57:32.978848 containerd[1450]: time="2026-01-24T00:57:32.978402069Z" level=info msg="TearDown network for sandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" successfully" Jan 24 00:57:32.983885 containerd[1450]: time="2026-01-24T00:57:32.983720841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:32.984574 containerd[1450]: time="2026-01-24T00:57:32.984421709Z" level=info msg="RemovePodSandbox \"d8e32564dcd2f5a34046646705ad0c659aa1dc7703a9b15234c68fd183ad1029\" returns successfully" Jan 24 00:57:32.985721 containerd[1450]: time="2026-01-24T00:57:32.985419964Z" level=info msg="StopPodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\"" Jan 24 00:57:32.993937 kubelet[2488]: E0124 00:57:32.993874 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.033 [WARNING][5654] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" WorkloadEndpoint="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.034 [INFO][5654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.034 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" iface="eth0" netns="" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.034 [INFO][5654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.034 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.071 [INFO][5662] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.072 [INFO][5662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.072 [INFO][5662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.081 [WARNING][5662] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.081 [INFO][5662] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.084 [INFO][5662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.091611 containerd[1450]: 2026-01-24 00:57:33.087 [INFO][5654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.093116 containerd[1450]: time="2026-01-24T00:57:33.091415628Z" level=info msg="TearDown network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" successfully" Jan 24 00:57:33.093116 containerd[1450]: time="2026-01-24T00:57:33.092303093Z" level=info msg="StopPodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" returns successfully" Jan 24 00:57:33.094346 containerd[1450]: time="2026-01-24T00:57:33.093679491Z" level=info msg="RemovePodSandbox for \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\"" Jan 24 00:57:33.094346 containerd[1450]: time="2026-01-24T00:57:33.094067415Z" level=info msg="Forcibly stopping sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\"" Jan 24 00:57:33.185842 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:33.196330 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:41700.service: Deactivated successfully. Jan 24 00:57:33.199175 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.138 [WARNING][5680] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" WorkloadEndpoint="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.138 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.138 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" iface="eth0" netns="" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.138 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.138 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.177 [INFO][5689] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.177 [INFO][5689] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.177 [INFO][5689] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.189 [WARNING][5689] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.189 [INFO][5689] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" HandleID="k8s-pod-network.c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Workload="localhost-k8s-whisker--65fd74bb7--xjhrv-eth0" Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.192 [INFO][5689] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.202143 containerd[1450]: 2026-01-24 00:57:33.198 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f" Jan 24 00:57:33.204344 containerd[1450]: time="2026-01-24T00:57:33.202566701Z" level=info msg="TearDown network for sandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" successfully" Jan 24 00:57:33.202746 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:57:33.207219 containerd[1450]: time="2026-01-24T00:57:33.207073671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 24 00:57:33.207219 containerd[1450]: time="2026-01-24T00:57:33.207176573Z" level=info msg="RemovePodSandbox \"c39a442ceedaf11c06d9817589f27aeab4a3373d3da3f524c53206256b2fa24f\" returns successfully" Jan 24 00:57:33.207995 containerd[1450]: time="2026-01-24T00:57:33.207899109Z" level=info msg="StopPodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\"" Jan 24 00:57:33.212125 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:41716.service - OpenSSH per-connection server daemon (10.0.0.1:41716). Jan 24 00:57:33.213907 systemd-logind[1429]: Removed session 17. Jan 24 00:57:33.261856 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 41716 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:33.265332 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:33.272854 systemd-logind[1429]: New session 18 of user core. Jan 24 00:57:33.283625 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.262 [WARNING][5711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fe9f3ea-2686-424b-8279-86ca8e141669", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a", Pod:"calico-apiserver-5df9b4f89d-kh5pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c68bc67bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.263 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.263 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" iface="eth0" netns="" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.263 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.263 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.292 [INFO][5721] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.292 [INFO][5721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.292 [INFO][5721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.298 [WARNING][5721] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.298 [INFO][5721] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.299 [INFO][5721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.304930 containerd[1450]: 2026-01-24 00:57:33.302 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.305737 containerd[1450]: time="2026-01-24T00:57:33.305579523Z" level=info msg="TearDown network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" successfully" Jan 24 00:57:33.305737 containerd[1450]: time="2026-01-24T00:57:33.305616081Z" level=info msg="StopPodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" returns successfully" Jan 24 00:57:33.306312 containerd[1450]: time="2026-01-24T00:57:33.306260423Z" level=info msg="RemovePodSandbox for \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\"" Jan 24 00:57:33.306387 containerd[1450]: time="2026-01-24T00:57:33.306328900Z" level=info msg="Forcibly stopping sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\"" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.344 [WARNING][5740] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0", GenerateName:"calico-apiserver-5df9b4f89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fe9f3ea-2686-424b-8279-86ca8e141669", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df9b4f89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b872c95a08ed4a59c359133cdf9d24309c02887a1d8deaef62a08a873f7313a", Pod:"calico-apiserver-5df9b4f89d-kh5pp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c68bc67bbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.344 [INFO][5740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.344 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" iface="eth0" netns="" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.344 [INFO][5740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.344 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.378 [INFO][5756] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.378 [INFO][5756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.378 [INFO][5756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.388 [WARNING][5756] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.388 [INFO][5756] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" HandleID="k8s-pod-network.23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Workload="localhost-k8s-calico--apiserver--5df9b4f89d--kh5pp-eth0" Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.390 [INFO][5756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.396061 containerd[1450]: 2026-01-24 00:57:33.393 [INFO][5740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba" Jan 24 00:57:33.396693 containerd[1450]: time="2026-01-24T00:57:33.396090196Z" level=info msg="TearDown network for sandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" successfully" Jan 24 00:57:33.400639 containerd[1450]: time="2026-01-24T00:57:33.400519628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:33.400639 containerd[1450]: time="2026-01-24T00:57:33.400602333Z" level=info msg="RemovePodSandbox \"23b075c74b9924b2a16ea14bcb0aee762587000292f1ce7f0a6e08d286ac74ba\" returns successfully" Jan 24 00:57:33.401346 containerd[1450]: time="2026-01-24T00:57:33.401250374Z" level=info msg="StopPodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\"" Jan 24 00:57:33.426845 sshd[5699]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:33.431712 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:41716.service: Deactivated successfully. Jan 24 00:57:33.434722 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:57:33.435823 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:57:33.438291 systemd-logind[1429]: Removed session 18. Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.448 [WARNING][5777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fw5bw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"203aa399-08cf-4bd0-a44a-0a01debc5662", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478", Pod:"goldmane-666569f655-fw5bw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0f482d7467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.448 [INFO][5777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.448 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" iface="eth0" netns="" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.448 [INFO][5777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.448 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.473 [INFO][5787] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.473 [INFO][5787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.473 [INFO][5787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.480 [WARNING][5787] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.480 [INFO][5787] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.482 [INFO][5787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.488865 containerd[1450]: 2026-01-24 00:57:33.485 [INFO][5777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.488865 containerd[1450]: time="2026-01-24T00:57:33.488910750Z" level=info msg="TearDown network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" successfully" Jan 24 00:57:33.489722 containerd[1450]: time="2026-01-24T00:57:33.488948992Z" level=info msg="StopPodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" returns successfully" Jan 24 00:57:33.489787 containerd[1450]: time="2026-01-24T00:57:33.489727384Z" level=info msg="RemovePodSandbox for \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\"" Jan 24 00:57:33.489787 containerd[1450]: time="2026-01-24T00:57:33.489757982Z" level=info msg="Forcibly stopping sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\"" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.532 [WARNING][5806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--fw5bw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"203aa399-08cf-4bd0-a44a-0a01debc5662", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1942f7dcf823c2b708a182f5b52d0d50f836c2781b5007e3ebd64ce63e899478", Pod:"goldmane-666569f655-fw5bw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0f482d7467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.532 [INFO][5806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.532 [INFO][5806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" iface="eth0" netns="" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.532 [INFO][5806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.532 [INFO][5806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.558 [INFO][5815] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.559 [INFO][5815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.559 [INFO][5815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.569 [WARNING][5815] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.569 [INFO][5815] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" HandleID="k8s-pod-network.b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Workload="localhost-k8s-goldmane--666569f655--fw5bw-eth0" Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.571 [INFO][5815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:57:33.576343 containerd[1450]: 2026-01-24 00:57:33.574 [INFO][5806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd" Jan 24 00:57:33.576746 containerd[1450]: time="2026-01-24T00:57:33.576372487Z" level=info msg="TearDown network for sandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" successfully" Jan 24 00:57:33.581080 containerd[1450]: time="2026-01-24T00:57:33.580966367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:57:33.581131 containerd[1450]: time="2026-01-24T00:57:33.581076051Z" level=info msg="RemovePodSandbox \"b46bb1d620cdecb3fb039dbc67fce533f626dc5703e592403cb1249249bb13bd\" returns successfully" Jan 24 00:57:33.586638 kubelet[2488]: E0124 00:57:33.585850 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:57:33.587256 kubelet[2488]: E0124 00:57:33.587136 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:57:35.586367 kubelet[2488]: E0124 00:57:35.586171 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw5bw" 
podUID="203aa399-08cf-4bd0-a44a-0a01debc5662" Jan 24 00:57:37.586893 kubelet[2488]: E0124 00:57:37.586666 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:57:37.587696 kubelet[2488]: E0124 00:57:37.587478 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b646d8bfb-nxb7c" podUID="0364261e-0b7f-4a7d-aec9-83adc08c04f8" Jan 24 00:57:37.588924 kubelet[2488]: E0124 00:57:37.588854 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t469r" podUID="34330cde-9cb8-45f6-8598-34068565d43c" Jan 24 00:57:38.439253 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206). Jan 24 00:57:38.477618 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:38.479878 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:38.485644 systemd-logind[1429]: New session 19 of user core. Jan 24 00:57:38.490726 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:57:38.618898 sshd[5828]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:38.624335 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:45206.service: Deactivated successfully. Jan 24 00:57:38.627284 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:57:38.628253 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:57:38.629671 systemd-logind[1429]: Removed session 19. 
Jan 24 00:57:39.586550 containerd[1450]: time="2026-01-24T00:57:39.586485983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:57:39.653192 containerd[1450]: time="2026-01-24T00:57:39.653041403Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:39.654448 containerd[1450]: time="2026-01-24T00:57:39.654378637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:57:39.654523 containerd[1450]: time="2026-01-24T00:57:39.654497339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:57:39.654911 kubelet[2488]: E0124 00:57:39.654847 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:39.655528 kubelet[2488]: E0124 00:57:39.654924 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:57:39.655528 kubelet[2488]: E0124 00:57:39.655118 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:94fe4a6d3bbc42d590b86714a22fd0ec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d4995d8c5-4f2ww_calico-system(05590686-f70c-407a-ace8-b12a72f3a4b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:39.657799 containerd[1450]: time="2026-01-24T00:57:39.657549198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:57:39.719666 containerd[1450]: time="2026-01-24T00:57:39.719578929Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:39.721121 containerd[1450]: time="2026-01-24T00:57:39.721044518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:57:39.721210 containerd[1450]: time="2026-01-24T00:57:39.721159343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:57:39.721987 kubelet[2488]: E0124 00:57:39.721371 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:39.721987 kubelet[2488]: E0124 00:57:39.721418 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:57:39.721987 kubelet[2488]: E0124 00:57:39.721624 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fwmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d4995d8c5-4f2ww_calico-system(05590686-f70c-407a-ace8-b12a72f3a4b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:39.722868 kubelet[2488]: E0124 00:57:39.722822 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d4995d8c5-4f2ww" podUID="05590686-f70c-407a-ace8-b12a72f3a4b1" Jan 24 00:57:43.641015 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:45218.service - OpenSSH per-connection server daemon (10.0.0.1:45218). Jan 24 00:57:43.677126 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 45218 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:43.678825 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:43.683345 systemd-logind[1429]: New session 20 of user core. 
Jan 24 00:57:43.695668 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:57:43.807021 sshd[5851]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:43.810725 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:45218.service: Deactivated successfully. Jan 24 00:57:43.813823 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:57:43.816317 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:57:43.817815 systemd-logind[1429]: Removed session 20. Jan 24 00:57:46.585601 kubelet[2488]: E0124 00:57:46.585504 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:47.584866 kubelet[2488]: E0124 00:57:47.584767 2488 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:57:48.586715 containerd[1450]: time="2026-01-24T00:57:48.586626800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:48.645662 containerd[1450]: time="2026-01-24T00:57:48.645534996Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:48.646903 containerd[1450]: time="2026-01-24T00:57:48.646827277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:48.646976 containerd[1450]: time="2026-01-24T00:57:48.646902318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:48.647097 kubelet[2488]: E0124 00:57:48.647012 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.647097 kubelet[2488]: E0124 00:57:48.647081 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.647493 kubelet[2488]: E0124 00:57:48.647291 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2k96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df9b4f89d-2dhf7_calico-apiserver(e706493e-7f12-4ad3-8c2a-5a508961b9f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:48.647590 containerd[1450]: time="2026-01-24T00:57:48.647542913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:48.649157 kubelet[2488]: E0124 00:57:48.649015 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-2dhf7" podUID="e706493e-7f12-4ad3-8c2a-5a508961b9f4" Jan 24 00:57:48.713591 containerd[1450]: time="2026-01-24T00:57:48.713489715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:48.714976 containerd[1450]: time="2026-01-24T00:57:48.714905165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:48.715020 containerd[1450]: 
time="2026-01-24T00:57:48.714968645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:48.715248 kubelet[2488]: E0124 00:57:48.715220 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.715314 kubelet[2488]: E0124 00:57:48.715261 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:48.715456 kubelet[2488]: E0124 00:57:48.715383 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjv42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7599cd6db5-l6tp2_calico-apiserver(daf15e6c-e319-4b6a-b81a-cb796e8f2eb5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:48.716684 kubelet[2488]: E0124 00:57:48.716641 2488 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7599cd6db5-l6tp2" podUID="daf15e6c-e319-4b6a-b81a-cb796e8f2eb5" Jan 24 00:57:48.819568 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:58046.service - OpenSSH per-connection server daemon (10.0.0.1:58046). Jan 24 00:57:48.854116 sshd[5865]: Accepted publickey for core from 10.0.0.1 port 58046 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:57:48.856067 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:57:48.860709 systemd-logind[1429]: New session 21 of user core. Jan 24 00:57:48.869605 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:57:48.978173 sshd[5865]: pam_unix(sshd:session): session closed for user core Jan 24 00:57:48.983586 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:58046.service: Deactivated successfully. Jan 24 00:57:48.985515 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:57:48.986267 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:57:48.987742 systemd-logind[1429]: Removed session 21. Jan 24 00:57:49.586411 containerd[1450]: time="2026-01-24T00:57:49.586321532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:57:49.648158 containerd[1450]: time="2026-01-24T00:57:49.648069412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:49.649656 containerd[1450]: time="2026-01-24T00:57:49.649597049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:57:49.649786 containerd[1450]: time="2026-01-24T00:57:49.649635569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:49.649891 kubelet[2488]: E0124 00:57:49.649848 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:49.650216 kubelet[2488]: E0124 00:57:49.649907 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:57:49.650216 kubelet[2488]: E0124 00:57:49.650091 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmz9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5df9b4f89d-kh5pp_calico-apiserver(7fe9f3ea-2686-424b-8279-86ca8e141669): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:49.651796 kubelet[2488]: E0124 00:57:49.651700 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5df9b4f89d-kh5pp" podUID="7fe9f3ea-2686-424b-8279-86ca8e141669" Jan 24 00:57:50.586867 containerd[1450]: time="2026-01-24T00:57:50.586784754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:57:50.646754 containerd[1450]: time="2026-01-24T00:57:50.646657354Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:50.648234 containerd[1450]: time="2026-01-24T00:57:50.648140500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:57:50.648624 containerd[1450]: 
time="2026-01-24T00:57:50.648231999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:57:50.648657 kubelet[2488]: E0124 00:57:50.648490 2488 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:50.648657 kubelet[2488]: E0124 00:57:50.648548 2488 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:57:50.648964 kubelet[2488]: E0124 00:57:50.648767 2488 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8s6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fw5bw_calico-system(203aa399-08cf-4bd0-a44a-0a01debc5662): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:50.650821 kubelet[2488]: E0124 00:57:50.650771 2488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fw5bw" podUID="203aa399-08cf-4bd0-a44a-0a01debc5662"