Jan 28 00:56:12.560357 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 00:56:12.560388 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 00:56:12.560407 kernel: BIOS-provided physical RAM map:
Jan 28 00:56:12.560416 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 00:56:12.560424 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 00:56:12.560432 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 00:56:12.560443 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 28 00:56:12.560451 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 28 00:56:12.560459 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 00:56:12.560471 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 00:56:12.560480 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 00:56:12.560489 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 00:56:12.560523 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 00:56:12.560534 kernel: NX (Execute Disable) protection: active
Jan 28 00:56:12.560544 kernel: APIC: Static calls initialized
Jan 28 00:56:12.560582 kernel: SMBIOS 2.8 present.
Jan 28 00:56:12.560593 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 28 00:56:12.560603 kernel: Hypervisor detected: KVM
Jan 28 00:56:12.560612 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 00:56:12.560621 kernel: kvm-clock: using sched offset of 9787858608 cycles
Jan 28 00:56:12.560631 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 00:56:12.560640 kernel: tsc: Detected 2445.424 MHz processor
Jan 28 00:56:12.560651 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 00:56:12.560661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 00:56:12.560675 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 00:56:12.560685 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 00:56:12.560695 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 00:56:12.560704 kernel: Using GB pages for direct mapping
Jan 28 00:56:12.560714 kernel: ACPI: Early table checksum verification disabled
Jan 28 00:56:12.560723 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 28 00:56:12.560733 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560744 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560753 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560767 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 28 00:56:12.560776 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560786 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560796 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560805 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 00:56:12.560815 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 28 00:56:12.560826 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 28 00:56:12.560843 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 28 00:56:12.560858 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 28 00:56:12.560869 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 28 00:56:12.560880 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 28 00:56:12.560891 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 28 00:56:12.560902 kernel: No NUMA configuration found
Jan 28 00:56:12.560913 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 28 00:56:12.560968 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 28 00:56:12.560981 kernel: Zone ranges:
Jan 28 00:56:12.560990 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 00:56:12.561001 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 28 00:56:12.561011 kernel: Normal empty
Jan 28 00:56:12.561022 kernel: Movable zone start for each node
Jan 28 00:56:12.561032 kernel: Early memory node ranges
Jan 28 00:56:12.561043 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 00:56:12.561053 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 28 00:56:12.561065 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 28 00:56:12.561081 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 00:56:12.561115 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 00:56:12.561127 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 28 00:56:12.561137 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 00:56:12.561147 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 00:56:12.561157 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 00:56:12.561167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 00:56:12.561177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 00:56:12.561187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 00:56:12.561202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 00:56:12.561213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 00:56:12.561223 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 00:56:12.561234 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 00:56:12.561244 kernel: TSC deadline timer available
Jan 28 00:56:12.561253 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 28 00:56:12.561263 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 00:56:12.561273 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 00:56:12.561347 kernel: kvm-guest: setup PV sched yield
Jan 28 00:56:12.561365 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 00:56:12.561377 kernel: Booting paravirtualized kernel on KVM
Jan 28 00:56:12.561387 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 00:56:12.561398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 00:56:12.561408 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 28 00:56:12.561418 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 28 00:56:12.561428 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 00:56:12.561438 kernel: kvm-guest: PV spinlocks enabled
Jan 28 00:56:12.561448 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 00:56:12.561464 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 00:56:12.561474 kernel: random: crng init done
Jan 28 00:56:12.561483 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 00:56:12.561494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 00:56:12.561505 kernel: Fallback order for Node 0: 0
Jan 28 00:56:12.561515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 28 00:56:12.561525 kernel: Policy zone: DMA32
Jan 28 00:56:12.561535 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 00:56:12.561550 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 28 00:56:12.561560 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 00:56:12.561571 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 00:56:12.561582 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 00:56:12.561592 kernel: Dynamic Preempt: voluntary
Jan 28 00:56:12.561602 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 00:56:12.561613 kernel: rcu: RCU event tracing is enabled.
Jan 28 00:56:12.561624 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 00:56:12.561635 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 00:56:12.561650 kernel: Rude variant of Tasks RCU enabled.
Jan 28 00:56:12.561660 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 00:56:12.561670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 00:56:12.561680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 00:56:12.561713 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 00:56:12.561725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 00:56:12.561735 kernel: Console: colour VGA+ 80x25
Jan 28 00:56:12.561745 kernel: printk: console [ttyS0] enabled
Jan 28 00:56:12.561755 kernel: ACPI: Core revision 20230628
Jan 28 00:56:12.561765 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 00:56:12.561780 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 00:56:12.561790 kernel: x2apic enabled
Jan 28 00:56:12.561800 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 00:56:12.561810 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 00:56:12.561821 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 00:56:12.561831 kernel: kvm-guest: setup PV IPIs
Jan 28 00:56:12.561842 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 00:56:12.561867 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 28 00:56:12.561879 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 28 00:56:12.561890 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 00:56:12.561900 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 00:56:12.561915 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 00:56:12.561958 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 00:56:12.561970 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 00:56:12.561981 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 00:56:12.561992 kernel: Speculative Store Bypass: Vulnerable
Jan 28 00:56:12.562007 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 00:56:12.562038 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 00:56:12.562051 kernel: active return thunk: srso_alias_return_thunk
Jan 28 00:56:12.562063 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 00:56:12.562073 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 00:56:12.562084 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 00:56:12.562095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 00:56:12.562105 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 00:56:12.562120 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 00:56:12.562131 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 00:56:12.562141 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 00:56:12.562152 kernel: Freeing SMP alternatives memory: 32K
Jan 28 00:56:12.562163 kernel: pid_max: default: 32768 minimum: 301
Jan 28 00:56:12.562173 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 00:56:12.562183 kernel: landlock: Up and running.
Jan 28 00:56:12.562193 kernel: SELinux: Initializing.
Jan 28 00:56:12.562203 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 00:56:12.562217 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 00:56:12.562229 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 00:56:12.562240 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 00:56:12.562251 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 00:56:12.562262 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 00:56:12.562273 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 00:56:12.562283 kernel: signal: max sigframe size: 1776
Jan 28 00:56:12.562294 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 00:56:12.562368 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 00:56:12.562388 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 00:56:12.562399 kernel: smp: Bringing up secondary CPUs ...
Jan 28 00:56:12.562409 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 00:56:12.562420 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 00:56:12.562429 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 00:56:12.562439 kernel: smpboot: Max logical packages: 1
Jan 28 00:56:12.562449 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 28 00:56:12.562459 kernel: devtmpfs: initialized
Jan 28 00:56:12.562471 kernel: x86/mm: Memory block size: 128MB
Jan 28 00:56:12.562486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 00:56:12.562497 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 00:56:12.562508 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 00:56:12.562518 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 00:56:12.562529 kernel: audit: initializing netlink subsys (disabled)
Jan 28 00:56:12.562539 kernel: audit: type=2000 audit(1769561768.882:1): state=initialized audit_enabled=0 res=1
Jan 28 00:56:12.562550 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 00:56:12.562560 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 00:56:12.562571 kernel: cpuidle: using governor menu
Jan 28 00:56:12.562587 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 00:56:12.562598 kernel: dca service started, version 1.12.1
Jan 28 00:56:12.562609 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 00:56:12.562619 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 00:56:12.562630 kernel: PCI: Using configuration type 1 for base access
Jan 28 00:56:12.562641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 00:56:12.562651 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 00:56:12.562662 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 00:56:12.562673 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 00:56:12.562689 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 00:56:12.562700 kernel: ACPI: Added _OSI(Module Device)
Jan 28 00:56:12.562712 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 00:56:12.562723 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 00:56:12.562733 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 00:56:12.562744 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 00:56:12.562754 kernel: ACPI: Interpreter enabled
Jan 28 00:56:12.562765 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 00:56:12.562775 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 00:56:12.562790 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 00:56:12.562800 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 00:56:12.562809 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 00:56:12.562820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 00:56:12.563265 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 00:56:12.563534 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 00:56:12.563734 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 00:56:12.563758 kernel: PCI host bridge to bus 0000:00
Jan 28 00:56:12.564045 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 00:56:12.564238 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 00:56:12.564507 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 00:56:12.564687 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 00:56:12.564866 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 00:56:12.565095 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 28 00:56:12.565384 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 00:56:12.565733 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 00:56:12.566076 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 28 00:56:12.566271 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 28 00:56:12.566523 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 28 00:56:12.566725 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 28 00:56:12.566961 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 00:56:12.567265 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 28 00:56:12.567522 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 28 00:56:12.567714 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 28 00:56:12.567903 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 28 00:56:12.568224 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 28 00:56:12.568490 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 28 00:56:12.568683 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 28 00:56:12.568881 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 28 00:56:12.569193 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 28 00:56:12.569464 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 28 00:56:12.569661 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 28 00:56:12.569851 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 28 00:56:12.570088 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 28 00:56:12.570401 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 00:56:12.570619 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 00:56:12.570967 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 00:56:12.571169 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 28 00:56:12.571418 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 28 00:56:12.571755 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 00:56:12.572000 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 28 00:56:12.572026 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 00:56:12.572039 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 00:56:12.572050 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 00:56:12.572061 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 00:56:12.572073 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 00:56:12.572084 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 00:56:12.572095 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 00:56:12.572105 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 00:56:12.572116 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 00:56:12.572132 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 00:56:12.572142 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 00:56:12.572154 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 00:56:12.572165 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 00:56:12.572175 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 00:56:12.572186 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 00:56:12.572197 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 00:56:12.572208 kernel: iommu: Default domain type: Translated
Jan 28 00:56:12.572219 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 00:56:12.572235 kernel: PCI: Using ACPI for IRQ routing
Jan 28 00:56:12.572246 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 00:56:12.572257 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 00:56:12.572267 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 28 00:56:12.572543 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 00:56:12.572738 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 00:56:12.572976 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 00:56:12.572996 kernel: vgaarb: loaded
Jan 28 00:56:12.573014 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 00:56:12.573026 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 00:56:12.573037 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 00:56:12.573048 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 00:56:12.573059 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 00:56:12.573070 kernel: pnp: PnP ACPI init
Jan 28 00:56:12.573583 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 00:56:12.573601 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 00:56:12.573621 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 00:56:12.573632 kernel: NET: Registered PF_INET protocol family
Jan 28 00:56:12.573642 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 00:56:12.573652 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 00:56:12.573663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 00:56:12.573675 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 00:56:12.573685 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 00:56:12.573697 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 00:56:12.573707 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 00:56:12.573723 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 00:56:12.573734 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 00:56:12.573745 kernel: NET: Registered PF_XDP protocol family
Jan 28 00:56:12.573979 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 00:56:12.574171 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 00:56:12.574409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 00:56:12.574588 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 00:56:12.574764 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 00:56:12.574993 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 28 00:56:12.575011 kernel: PCI: CLS 0 bytes, default 64
Jan 28 00:56:12.575022 kernel: Initialise system trusted keyrings
Jan 28 00:56:12.575033 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 00:56:12.575045 kernel: Key type asymmetric registered
Jan 28 00:56:12.575056 kernel: Asymmetric key parser 'x509' registered
Jan 28 00:56:12.575068 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 00:56:12.575078 kernel: io scheduler mq-deadline registered
Jan 28 00:56:12.575089 kernel: io scheduler kyber registered
Jan 28 00:56:12.575105 kernel: io scheduler bfq registered
Jan 28 00:56:12.575116 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 00:56:12.575127 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 00:56:12.575139 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 00:56:12.575149 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 00:56:12.575161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 00:56:12.575173 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 00:56:12.575184 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 00:56:12.575195 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 00:56:12.575205 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 00:56:12.575570 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 00:56:12.575590 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 00:56:12.575778 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 00:56:12.576017 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T00:56:11 UTC (1769561771)
Jan 28 00:56:12.576249 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 00:56:12.576268 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 00:56:12.576279 kernel: NET: Registered PF_INET6 protocol family
Jan 28 00:56:12.576298 kernel: Segment Routing with IPv6
Jan 28 00:56:12.576364 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 00:56:12.576376 kernel: NET: Registered PF_PACKET protocol family
Jan 28 00:56:12.576387 kernel: Key type dns_resolver registered
Jan 28 00:56:12.576397 kernel: IPI shorthand broadcast: enabled
Jan 28 00:56:12.576408 kernel: sched_clock: Marking stable (2726070607, 567083036)->(3557871490, -264717847)
Jan 28 00:56:12.576420 kernel: registered taskstats version 1
Jan 28 00:56:12.576430 kernel: Loading compiled-in X.509 certificates
Jan 28 00:56:12.576441 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 00:56:12.576457 kernel: Key type .fscrypt registered
Jan 28 00:56:12.576468 kernel: Key type fscrypt-provisioning registered
Jan 28 00:56:12.576478 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 00:56:12.576489 kernel: ima: Allocated hash algorithm: sha1
Jan 28 00:56:12.576501 kernel: ima: No architecture policies found
Jan 28 00:56:12.576513 kernel: hrtimer: interrupt took 5042025 ns
Jan 28 00:56:12.576524 kernel: clk: Disabling unused clocks
Jan 28 00:56:12.576536 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 00:56:12.576547 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 00:56:12.576564 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 00:56:12.576576 kernel: Run /init as init process
Jan 28 00:56:12.576586 kernel: with arguments:
Jan 28 00:56:12.576597 kernel: /init
Jan 28 00:56:12.576607 kernel: with environment:
Jan 28 00:56:12.576618 kernel: HOME=/
Jan 28 00:56:12.576630 kernel: TERM=linux
Jan 28 00:56:12.576643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 00:56:12.576661 systemd[1]: Detected virtualization kvm.
Jan 28 00:56:12.576674 systemd[1]: Detected architecture x86-64.
Jan 28 00:56:12.576686 systemd[1]: Running in initrd.
Jan 28 00:56:12.576698 systemd[1]: No hostname configured, using default hostname.
Jan 28 00:56:12.576709 systemd[1]: Hostname set to <localhost>.
Jan 28 00:56:12.576721 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 00:56:12.578499 systemd[1]: Queued start job for default target initrd.target.
Jan 28 00:56:12.578522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 00:56:12.578594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 00:56:12.578608 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 00:56:12.578619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 00:56:12.578631 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 00:56:12.578643 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 00:56:12.578656 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 00:56:12.578669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 00:56:12.578687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 00:56:12.578699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 00:56:12.578710 systemd[1]: Reached target paths.target - Path Units.
Jan 28 00:56:12.578722 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 00:56:12.578752 systemd[1]: Reached target swap.target - Swaps.
Jan 28 00:56:12.578768 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 00:56:12.578784 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 00:56:12.578795 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 00:56:12.578808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 00:56:12.578820 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 00:56:12.578832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 00:56:12.578844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 00:56:12.578855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 00:56:12.578867 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 00:56:12.578879 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 00:56:12.578895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 00:56:12.578907 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 00:56:12.578920 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 00:56:12.579097 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 00:56:12.579108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 00:56:12.579120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:56:12.579483 systemd-journald[195]: Collecting audit messages is disabled.
Jan 28 00:56:12.579523 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 00:56:12.579536 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 00:56:12.579548 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 00:56:12.579567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 00:56:12.579580 systemd-journald[195]: Journal started
Jan 28 00:56:12.579602 systemd-journald[195]: Runtime Journal (/run/log/journal/36f0e8e2e4924b629b7b669c7f5d4db7) is 6.0M, max 48.4M, 42.3M free.
Jan 28 00:56:12.588355 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 00:56:12.590390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 00:56:12.606864 systemd-modules-load[196]: Inserted module 'overlay'
Jan 28 00:56:12.610973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 00:56:12.636731 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 00:56:12.659603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 00:56:12.862658 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 00:56:12.862710 kernel: Bridge firewalling registered
Jan 28 00:56:12.678087 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 28 00:56:12.870579 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 00:56:12.875398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:56:12.886669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 00:56:12.905874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:56:12.913848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 00:56:12.942582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 00:56:12.957742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 00:56:12.958388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:56:12.966438 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 00:56:13.015293 dracut-cmdline[235]: dracut-dracut-053
Jan 28 00:56:13.020878 dracut-cmdline[235]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 00:56:13.028794 systemd-resolved[232]: Positive Trust Anchors:
Jan 28 00:56:13.028806 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 00:56:13.028847 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 00:56:13.034000 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 28 00:56:13.037034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 00:56:13.043900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:56:13.174664 kernel: SCSI subsystem initialized
Jan 28 00:56:13.195890 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 00:56:13.213612 kernel: iscsi: registered transport (tcp)
Jan 28 00:56:13.253727 kernel: iscsi: registered transport (qla4xxx)
Jan 28 00:56:13.254029 kernel: QLogic iSCSI HBA Driver
Jan 28 00:56:13.356655 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 00:56:13.377721 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 00:56:13.421402 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 00:56:13.421476 kernel: device-mapper: uevent: version 1.0.3
Jan 28 00:56:13.426084 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 00:56:13.487444 kernel: raid6: avx2x4 gen() 20337 MB/s
Jan 28 00:56:13.506395 kernel: raid6: avx2x2 gen() 19211 MB/s
Jan 28 00:56:13.527341 kernel: raid6: avx2x1 gen() 11641 MB/s
Jan 28 00:56:13.527418 kernel: raid6: using algorithm avx2x4 gen() 20337 MB/s
Jan 28 00:56:13.551657 kernel: raid6: .... xor() 4048 MB/s, rmw enabled
Jan 28 00:56:13.554983 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 00:56:13.588579 kernel: xor: automatically using best checksumming function avx
Jan 28 00:56:13.763527 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 00:56:13.855056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 00:56:14.665457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 00:56:14.760060 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Jan 28 00:56:14.816228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 00:56:14.833624 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 00:56:14.917203 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 28 00:56:15.173284 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 00:56:15.254857 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 00:56:15.473587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 00:56:15.499803 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 00:56:15.529733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 00:56:15.537200 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 00:56:15.551686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 00:56:15.558529 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 00:56:15.576609 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 00:56:15.601874 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 00:56:15.613010 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 00:56:15.635239 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 28 00:56:15.640492 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 00:56:15.644423 kernel: libata version 3.00 loaded.
Jan 28 00:56:15.653236 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 00:56:15.653483 kernel: GPT:9289727 != 19775487
Jan 28 00:56:15.653544 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 00:56:15.657803 kernel: GPT:9289727 != 19775487
Jan 28 00:56:15.657835 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 00:56:15.664120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 00:56:15.664157 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 00:56:15.664752 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 00:56:15.665236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 00:56:15.665516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:56:15.678889 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:56:15.700585 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 28 00:56:15.700615 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 00:56:15.701210 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 00:56:15.701541 kernel: AES CTR mode by8 optimization enabled
Jan 28 00:56:15.706819 kernel: scsi host0: ahci
Jan 28 00:56:15.707249 kernel: scsi host1: ahci
Jan 28 00:56:15.708752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 00:56:15.715161 kernel: scsi host2: ahci
Jan 28 00:56:15.711991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:56:15.722887 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:56:16.298222 kernel: scsi host3: ahci
Jan 28 00:56:16.298622 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (473)
Jan 28 00:56:16.302973 kernel: scsi host4: ahci
Jan 28 00:56:16.307402 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477)
Jan 28 00:56:16.311836 kernel: scsi host5: ahci
Jan 28 00:56:16.312255 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 28 00:56:16.312276 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 28 00:56:16.312837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:56:16.324814 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 28 00:56:16.324891 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 28 00:56:16.324914 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 28 00:56:16.329803 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 28 00:56:16.357508 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 00:56:16.368539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 00:56:16.374904 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 00:56:16.376255 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 00:56:16.392275 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 00:56:16.424754 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 00:56:16.436378 disk-uuid[555]: Primary Header is updated.
Jan 28 00:56:16.436378 disk-uuid[555]: Secondary Entries is updated.
Jan 28 00:56:16.436378 disk-uuid[555]: Secondary Header is updated.
Jan 28 00:56:16.854995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 00:56:16.855029 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 00:56:16.855040 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 00:56:16.855071 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 00:56:16.855082 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 00:56:16.855091 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 00:56:16.855101 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 00:56:16.855118 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 00:56:16.855128 kernel: ata3.00: applying bridge limits
Jan 28 00:56:16.855138 kernel: ata3.00: configured for UDMA/100
Jan 28 00:56:16.855147 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 00:56:16.855567 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 00:56:16.855762 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 00:56:16.860569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:56:16.877746 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:56:16.897407 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 00:56:16.919493 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:56:17.463384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 00:56:17.464008 disk-uuid[556]: The operation has completed successfully.
Jan 28 00:56:17.521692 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 00:56:17.521915 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 00:56:17.558849 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 00:56:17.568060 sh[597]: Success
Jan 28 00:56:17.593446 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 28 00:56:17.660356 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 00:56:17.684602 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 00:56:17.696554 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 00:56:17.719729 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22
Jan 28 00:56:17.719786 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:56:17.719854 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 00:56:17.722918 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 00:56:17.725420 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 00:56:17.747695 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 00:56:17.754009 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 00:56:17.769764 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 00:56:17.779580 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 00:56:17.817834 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:56:17.818235 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:56:17.818258 kernel: BTRFS info (device vda6): using free space tree
Jan 28 00:56:17.826489 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 00:56:17.850087 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 00:56:17.857210 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:56:17.866711 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 00:56:17.879812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 00:56:18.421982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 00:56:18.728825 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 00:56:18.744479 ignition[671]: Ignition 2.19.0
Jan 28 00:56:18.744543 ignition[671]: Stage: fetch-offline
Jan 28 00:56:18.744643 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:56:18.744671 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:56:18.745181 ignition[671]: parsed url from cmdline: ""
Jan 28 00:56:18.745187 ignition[671]: no config URL provided
Jan 28 00:56:18.745225 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 00:56:18.745245 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 28 00:56:18.745392 ignition[671]: op(1): [started] loading QEMU firmware config module
Jan 28 00:56:18.745406 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 00:56:18.767838 systemd-networkd[784]: lo: Link UP
Jan 28 00:56:18.767863 systemd-networkd[784]: lo: Gained carrier
Jan 28 00:56:18.770353 systemd-networkd[784]: Enumeration completed
Jan 28 00:56:18.770537 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 00:56:18.771295 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:56:18.771340 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 00:56:18.775666 systemd-networkd[784]: eth0: Link UP
Jan 28 00:56:18.775672 systemd-networkd[784]: eth0: Gained carrier
Jan 28 00:56:18.775680 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:56:18.813778 ignition[671]: op(1): [finished] loading QEMU firmware config module
Jan 28 00:56:18.781460 systemd[1]: Reached target network.target - Network.
Jan 28 00:56:18.826456 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 00:56:19.041027 ignition[671]: parsing config with SHA512: 12f142004c6079923b3d70a721b3cf466b14e62ccb5772c06de5fd12cd1296a0f4e0eb861964d0603acb95da5ee55f97620bbe9b19fc287ae9db902c3d6b8d26
Jan 28 00:56:19.118478 unknown[671]: fetched base config from "system"
Jan 28 00:56:19.118901 unknown[671]: fetched user config from "qemu"
Jan 28 00:56:19.121866 ignition[671]: fetch-offline: fetch-offline passed
Jan 28 00:56:19.122553 ignition[671]: Ignition finished successfully
Jan 28 00:56:19.140784 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 00:56:19.151263 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 00:56:19.191458 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 00:56:19.425901 ignition[790]: Ignition 2.19.0
Jan 28 00:56:19.425977 ignition[790]: Stage: kargs
Jan 28 00:56:19.426543 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:56:19.426558 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:56:19.433008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 00:56:19.428213 ignition[790]: kargs: kargs passed
Jan 28 00:56:19.428284 ignition[790]: Ignition finished successfully
Jan 28 00:56:19.454529 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 00:56:19.500421 ignition[798]: Ignition 2.19.0
Jan 28 00:56:19.500458 ignition[798]: Stage: disks
Jan 28 00:56:19.503902 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 00:56:19.500629 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:56:19.511699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 00:56:19.500642 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 00:56:19.518104 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 00:56:19.502067 ignition[798]: disks: disks passed
Jan 28 00:56:19.522684 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 00:56:19.502143 ignition[798]: Ignition finished successfully
Jan 28 00:56:19.522852 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 00:56:19.523971 systemd[1]: Reached target basic.target - Basic System.
Jan 28 00:56:19.542699 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 00:56:19.571771 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 28 00:56:19.592832 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 00:56:19.605701 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 00:56:20.328392 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none.
Jan 28 00:56:20.329648 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 00:56:20.334563 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 00:56:20.358521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 00:56:20.367883 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 00:56:20.373367 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Jan 28 00:56:20.380219 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 00:56:20.417794 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 00:56:20.417833 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 00:56:20.417846 kernel: BTRFS info (device vda6): using free space tree
Jan 28 00:56:20.417857 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 00:56:20.380747 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 00:56:20.380790 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 00:56:20.413049 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 00:56:20.421875 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 00:56:20.439797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 00:56:20.601483 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 28 00:56:20.606751 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 00:56:20.619705 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 28 00:56:20.632262 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 00:56:20.647657 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 00:56:20.854468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 00:56:20.876582 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 00:56:20.886818 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:56:20.896605 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:56:20.904009 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:20.928809 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:56:20.973454 ignition[930]: INFO : Ignition 2.19.0 Jan 28 00:56:20.973454 ignition[930]: INFO : Stage: mount Jan 28 00:56:20.981674 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:20.981674 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:20.981674 ignition[930]: INFO : mount: mount passed Jan 28 00:56:20.981674 ignition[930]: INFO : Ignition finished successfully Jan 28 00:56:20.996936 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:56:21.011720 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:56:21.360443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:56:21.419711 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Jan 28 00:56:21.426452 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:56:21.426492 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:56:21.426508 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:56:21.440514 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:56:21.445221 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:56:21.490541 ignition[959]: INFO : Ignition 2.19.0 Jan 28 00:56:21.490541 ignition[959]: INFO : Stage: files Jan 28 00:56:21.496418 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:21.496418 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:21.496418 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:56:21.496418 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:56:21.496418 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:56:21.516022 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:56:21.516022 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:56:21.516022 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:56:21.516022 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 00:56:21.516022 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 00:56:21.516022 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 00:56:21.516022 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 00:56:21.499697 unknown[959]: wrote ssh authorized keys file for user: core Jan 28 00:56:21.572432 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 00:56:22.001774 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 00:56:22.001774 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:56:22.001774 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:56:22.001774 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:56:22.021873 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:56:22.026599 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:56:22.032159 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:56:22.037112 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:56:22.042257 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:56:22.047388 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:56:22.052736 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:56:22.058273 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:22.066239 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:22.073910 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:22.084705 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 00:56:22.440379 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 00:56:23.924814 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:56:23.924814 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 28 00:56:23.936915 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 00:56:23.943273 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 00:56:23.943273 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 28 00:56:23.943273 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 28 00:56:23.955915 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:56:23.955915 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:56:23.955915 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 28 00:56:23.973130 ignition[959]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 28 00:56:23.973130 ignition[959]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 00:56:23.973130 ignition[959]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 00:56:23.993618 ignition[959]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 28 00:56:23.993618 ignition[959]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 00:56:24.135901 ignition[959]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 00:56:24.144497 ignition[959]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 00:56:24.150060 ignition[959]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 00:56:24.150060 ignition[959]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:56:24.158652 ignition[959]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:56:24.162629 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:56:24.168363 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:56:24.173708 ignition[959]: INFO : files: files passed Jan 28 00:56:24.176193 ignition[959]: INFO : Ignition finished successfully Jan 28 00:56:24.177466 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:56:24.199563 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:56:24.203824 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:56:24.218516 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:56:24.218685 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:56:24.228908 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 00:56:24.236925 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:56:24.236925 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:56:24.230127 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:56:24.251989 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:56:24.237211 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:56:24.264612 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 28 00:56:24.306838 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:56:24.307185 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:56:24.311050 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:56:24.318348 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:56:24.327861 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:56:24.350731 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:56:24.390246 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:56:24.408599 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:56:24.422069 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:56:24.426872 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:56:24.435221 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:56:24.443751 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:56:24.444020 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:56:24.451501 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:56:24.459515 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:56:24.467907 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:56:24.472583 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:56:24.481401 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:56:24.489658 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:56:24.498276 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:56:24.503514 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:56:24.511782 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:56:24.519739 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:56:24.525773 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:56:24.526082 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:56:24.535860 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:56:24.542179 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:56:24.546238 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:56:24.546699 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:56:24.554798 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:56:24.555021 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:56:24.560555 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:56:24.560692 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:56:24.568938 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:56:24.576935 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:56:24.583693 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 28 00:56:24.593750 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:56:24.605653 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:56:24.612105 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:56:24.612374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:56:24.617813 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:56:24.618055 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:56:24.624251 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:56:24.624546 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:56:24.688446 ignition[1013]: INFO : Ignition 2.19.0 Jan 28 00:56:24.688446 ignition[1013]: INFO : Stage: umount Jan 28 00:56:24.688446 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:56:24.688446 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:56:24.688446 ignition[1013]: INFO : umount: umount passed Jan 28 00:56:24.688446 ignition[1013]: INFO : Ignition finished successfully Jan 28 00:56:24.632401 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:56:24.632618 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:56:24.652819 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:56:24.657481 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:56:24.657733 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:56:24.665914 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:56:24.669399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:56:24.669680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:56:24.676622 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:56:24.676918 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:56:24.689720 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:56:24.689934 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:56:24.698834 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:56:24.699082 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:56:24.709653 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:56:24.713820 systemd[1]: Stopped target network.target - Network. Jan 28 00:56:24.721243 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:56:24.721432 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:56:24.730629 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:56:24.730719 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:56:24.746927 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:56:24.747083 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:56:24.754697 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:56:24.754788 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:56:24.763023 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:56:24.771707 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 28 00:56:24.790454 systemd-networkd[784]: eth0: DHCPv6 lease lost Jan 28 00:56:24.790730 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:56:24.790893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:56:24.819934 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:56:24.829887 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:56:24.873099 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:56:24.876372 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:56:24.886647 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:56:24.886747 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:56:24.897073 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:56:24.897188 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:56:24.921550 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:56:24.925642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:56:24.929631 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:56:24.942390 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:56:24.946034 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:56:24.956439 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:56:24.961396 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:56:24.972208 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:56:24.972628 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:56:24.987499 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:56:25.014599 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:56:25.014985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:56:25.018662 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:56:25.018838 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:56:25.027347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:56:25.027492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:56:25.033474 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:56:25.033556 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:56:25.044403 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:56:25.044517 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:56:25.056490 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:56:25.056622 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:56:25.086628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:56:25.086747 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:56:25.109617 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:56:25.117457 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 28 00:56:25.117559 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:56:25.130675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:56:25.134579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:25.146661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:56:25.146874 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:56:25.160213 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:56:25.195365 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:56:25.245494 systemd[1]: Switching root. Jan 28 00:56:25.295298 systemd-journald[195]: Journal stopped Jan 28 00:56:27.287104 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Jan 28 00:56:27.287255 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:56:27.287280 kernel: SELinux: policy capability open_perms=1 Jan 28 00:56:27.287291 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:56:27.287356 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:56:27.287369 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:56:27.287387 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:56:27.287398 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:56:27.287415 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:56:27.287426 kernel: audit: type=1403 audit(1769561785.606:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:56:27.287439 systemd[1]: Successfully loaded SELinux policy in 68.290ms. Jan 28 00:56:27.287489 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.295ms. Jan 28 00:56:27.287503 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:56:27.287533 systemd[1]: Detected virtualization kvm. Jan 28 00:56:27.287545 systemd[1]: Detected architecture x86-64. Jan 28 00:56:27.287558 systemd[1]: Detected first boot. Jan 28 00:56:27.287569 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:56:27.287581 zram_generator::config[1075]: No configuration found. Jan 28 00:56:27.287594 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:56:27.287627 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:56:27.287639 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 00:56:27.287652 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:56:27.287664 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:56:27.287675 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:56:27.287687 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:56:27.287699 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:56:27.287710 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:56:27.287722 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jan 28 00:56:27.287754 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:56:27.287766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:56:27.287778 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:56:27.287790 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:56:27.287802 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:56:27.287814 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:56:27.287828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:56:27.287839 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 00:56:27.287871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:56:27.287883 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:56:27.287895 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:56:27.287906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:56:27.287918 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:56:27.287930 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:56:27.287942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:56:27.287992 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:56:27.288018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:56:27.288031 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 00:56:27.288043 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:56:27.288054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:56:27.288066 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:56:27.288078 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:56:27.288089 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:56:27.288101 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:56:27.288112 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:56:27.288124 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:27.288160 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:56:27.288183 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:56:27.288203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:56:27.288221 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:56:27.288241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:27.288258 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:56:27.288276 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:56:27.288296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
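[Editor's note] The "Set up automount boot.automount" entry above means /boot is attached lazily on first access rather than mounted up front. By systemd's naming rules the unit name encodes the path, so boot.automount necessarily covers /boot; a sketch of the unit pair (descriptions assumed, the backing mount's What= is not shown in the log):

    # boot.automount (sketch)
    [Unit]
    Description=Boot partition Automount Point

    [Automount]
    Where=/boot

    # A matching boot.mount supplies the device and filesystem type.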
Jan 28 00:56:27.288373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:56:27.288386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:56:27.288398 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:56:27.288410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:56:27.288422 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:56:27.288434 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 28 00:56:27.288446 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 28 00:56:27.288458 kernel: fuse: init (API version 7.39) Jan 28 00:56:27.288495 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:56:27.288508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:56:27.288520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:56:27.288531 kernel: ACPI: bus type drm_connector registered Jan 28 00:56:27.288542 kernel: loop: module loaded Jan 28 00:56:27.288579 systemd-journald[1174]: Collecting audit messages is disabled. Jan 28 00:56:27.288603 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:56:27.288616 systemd-journald[1174]: Journal started Jan 28 00:56:27.288659 systemd-journald[1174]: Runtime Journal (/run/log/journal/36f0e8e2e4924b629b7b669c7f5d4db7) is 6.0M, max 48.4M, 42.3M free. Jan 28 00:56:27.310398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:56:27.320450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:27.328784 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:56:27.333392 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:56:27.336898 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:56:27.340644 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:56:27.343829 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:56:27.347478 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:56:27.351188 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:56:27.356153 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:56:27.361719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:56:27.366131 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:56:27.366572 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:56:27.370682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:56:27.371012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:27.374875 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:56:27.375275 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:56:27.382715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 28 00:56:27.383122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:27.387275 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:56:27.387700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:56:27.391238 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:56:27.391604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:56:27.395148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:56:27.399085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:56:27.403206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:56:27.422687 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:56:27.438448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:56:27.445449 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:56:27.448461 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:56:27.450906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:56:27.455495 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:56:27.458629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:56:27.464817 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:56:27.467817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:56:27.476619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:56:27.490831 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:56:27.497606 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:56:27.501557 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:56:27.503244 systemd-journald[1174]: Time spent on flushing to /var/log/journal/36f0e8e2e4924b629b7b669c7f5d4db7 is 31.695ms for 931 entries. Jan 28 00:56:27.503244 systemd-journald[1174]: System Journal (/var/log/journal/36f0e8e2e4924b629b7b669c7f5d4db7) is 8.0M, max 195.6M, 187.6M free. Jan 28 00:56:27.628056 systemd-journald[1174]: Received client request to flush runtime journal. Jan 28 00:56:27.522734 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:56:27.529530 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:56:27.615738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:56:27.623229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:56:27.660716 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 28 00:56:27.660736 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 28 00:56:27.664576 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 00:56:27.668692 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
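[Editor's note] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop services above are all instances of one template unit. Abbreviated from systemd's stock modprobe@.service:

    # /usr/lib/systemd/system/modprobe@.service (upstream template, abbreviated)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The leading "-" on ExecStart is why a module that fails to load (or is built in) still logs "Finished" instead of failing the boot.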
Jan 28 00:56:27.672772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:56:27.682940 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:56:27.694186 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 28 00:56:27.749810 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:56:27.759528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:56:27.843076 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 28 00:56:27.843111 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 28 00:56:27.851880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:56:28.698544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:56:28.714599 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:56:28.758234 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Jan 28 00:56:28.791638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:56:28.806546 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:56:28.837578 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:56:28.862778 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 28 00:56:28.931858 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:56:28.949380 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 00:56:28.957411 kernel: ACPI: button: Power Button [PWRF] Jan 28 00:56:28.985811 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1256) Jan 28 00:56:29.065250 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 00:56:29.066168 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 00:56:29.066849 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 00:56:29.091742 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 00:56:29.211141 systemd-networkd[1249]: lo: Link UP Jan 28 00:56:29.215842 systemd-networkd[1249]: lo: Gained carrier Jan 28 00:56:29.229078 systemd-networkd[1249]: Enumeration completed Jan 28 00:56:29.230479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:56:29.230491 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:56:29.230499 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:56:29.232410 systemd-networkd[1249]: eth0: Link UP Jan 28 00:56:29.232560 systemd-networkd[1249]: eth0: Gained carrier Jan 28 00:56:29.232687 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:56:29.236667 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:56:29.250492 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
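[Editor's note] Both "found matching network" entries above resolve to /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all (the zz- prefix sorts it last). Its exact contents are not echoed to the log, but the DHCPv4 lease acquired just below is consistent with a catch-all along these lines (sketch, not the verbatim file):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes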
Jan 28 00:56:29.255396 systemd-networkd[1249]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 00:56:29.275407 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:56:29.286624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:56:29.628385 kernel: kvm_amd: TSC scaling supported Jan 28 00:56:29.628906 kernel: kvm_amd: Nested Virtualization enabled Jan 28 00:56:29.628938 kernel: kvm_amd: Nested Paging enabled Jan 28 00:56:29.631469 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 00:56:29.631545 kernel: kvm_amd: PMU virtualization is disabled Jan 28 00:56:29.730426 kernel: EDAC MC: Ver: 3.0.0 Jan 28 00:56:29.774687 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 00:56:29.798685 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 00:56:29.839581 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:56:29.924206 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 00:56:30.023005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:56:30.038854 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 00:56:30.043546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:56:30.110216 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:56:30.160934 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 00:56:30.165871 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:56:30.170514 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:56:30.170608 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:56:30.174043 systemd[1]: Reached target machines.target - Containers. Jan 28 00:56:30.179767 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 00:56:30.208777 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:56:30.215252 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:56:30.221187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:30.223578 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:56:30.234133 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 00:56:30.242512 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:56:30.251190 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:56:30.288402 kernel: loop0: detected capacity change from 0 to 142488 Jan 28 00:56:30.294377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:56:30.314091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:56:30.321101 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 28 00:56:30.329223 systemd-networkd[1249]: eth0: Gained IPv6LL Jan 28 00:56:30.335294 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:56:30.347407 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:56:30.403378 kernel: loop1: detected capacity change from 0 to 140768 Jan 28 00:56:30.457358 kernel: loop2: detected capacity change from 0 to 224512 Jan 28 00:56:30.512442 kernel: loop3: detected capacity change from 0 to 142488 Jan 28 00:56:30.665851 kernel: loop4: detected capacity change from 0 to 140768 Jan 28 00:56:30.710189 kernel: loop5: detected capacity change from 0 to 224512 Jan 28 00:56:30.722560 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 00:56:30.724066 (sd-merge)[1312]: Merged extensions into '/usr'. Jan 28 00:56:30.733817 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:56:30.733868 systemd[1]: Reloading... Jan 28 00:56:31.041374 zram_generator::config[1340]: No configuration found. Jan 28 00:56:31.367811 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:56:31.445532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:56:31.652657 systemd[1]: Reloading finished in 917 ms. Jan 28 00:56:31.694843 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:56:31.700097 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:56:31.759622 systemd[1]: Starting ensure-sysext.service... Jan 28 00:56:31.765542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:56:31.819725 systemd[1]: Reloading requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:56:31.819784 systemd[1]: Reloading... Jan 28 00:56:31.876191 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:56:31.876820 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:56:31.888798 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:56:31.891058 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 28 00:56:31.891157 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 28 00:56:31.899139 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:56:31.899183 systemd-tmpfiles[1385]: Skipping /boot Jan 28 00:56:31.906426 zram_generator::config[1412]: No configuration found. Jan 28 00:56:32.738150 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:56:32.738214 systemd-tmpfiles[1385]: Skipping /boot Jan 28 00:56:33.858102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:56:34.075901 systemd[1]: Reloading finished in 2255 ms. Jan 28 00:56:34.198740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
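[Editor's note] sd-merge accepted 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' above because systemd-sysext only merges images carrying an extension-release file whose ID matches the host's os-release (or is "_any"). A sketch of the marker the kubernetes sysext image would need, with the values assumed rather than read from the log:

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the sysext image)
    ID=flatcar
    SYSEXT_LEVEL=1.0

The loop0 through loop5 capacity-change messages above are those extension images being attached as loop devices before the overlay onto /usr and /opt is assembled.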
Jan 28 00:56:34.213704 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:56:34.219566 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:56:34.226704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:56:34.234940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:56:34.241576 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:56:34.247964 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:34.248182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:34.250490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:56:34.259101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:56:34.276910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:56:34.295578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:34.295795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:34.298097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:56:34.298525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:34.305042 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:56:34.313081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:56:34.314789 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:34.321146 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:56:34.321668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:56:34.336814 augenrules[1486]: No rules Jan 28 00:56:34.337731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:56:34.338148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:56:34.345820 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:56:34.352046 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:56:34.362391 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:56:34.369786 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 00:56:34.379543 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:56:34.401911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:34.402469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:34.409752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:56:34.415065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
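[Editor's note] The "augenrules[1486]: No rules" entry above simply means /etc/audit/rules.d/ contributed nothing when audit-rules.service compiled the ruleset. Files there use ordinary auditctl syntax; a hypothetical one-rule example:

    # /etc/audit/rules.d/99-example.rules (hypothetical, not present on this system)
    # Watch the certificate store for writes and attribute changes, keyed for searching
    -w /etc/ssl/certs -p wa -k certs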
Jan 28 00:56:34.421733 systemd-resolved[1461]: Positive Trust Anchors: Jan 28 00:56:34.421754 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:56:34.421803 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:56:34.423612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:56:34.427896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:34.428159 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:56:34.428233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:34.429781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:56:34.430183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:34.433160 systemd-resolved[1461]: Defaulting to hostname 'linux'. Jan 28 00:56:34.435463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:56:34.435717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:34.439507 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:56:34.458213 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:56:34.458596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:56:34.463893 systemd[1]: Reached target network.target - Network. Jan 28 00:56:34.467579 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:56:34.471374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:56:34.475612 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:34.475827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:56:34.491699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:56:34.496260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:56:34.500290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:56:34.503864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:56:34.503963 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
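[Editor's note] The "Positive Trust Anchors" entry above is systemd-resolved echoing its built-in DNSSEC trust anchor: the DS record for the root zone ".", key tag 20326. The same record, taken verbatim from the log, could be supplied explicitly as a positive trust anchor file:

    # /etc/dnssec-trust-anchors.d/root.positive (mirrors resolved's built-in anchor)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

The negative anchors listed alongside it (10.in-addr.arpa, home.arpa, .local and friends) are the reverse and special-use domains resolved excludes from DNSSEC validation by default.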
Jan 28 00:56:34.504043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:56:34.505297 systemd[1]: Finished ensure-sysext.service. Jan 28 00:56:34.508593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:56:34.508846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:56:34.514087 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:56:34.514467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:56:34.518750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:56:34.519134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:56:34.528622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:56:34.528778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:56:34.541492 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 00:56:34.687479 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 00:56:35.231804 systemd-resolved[1461]: Clock change detected. Flushing caches. Jan 28 00:56:35.231830 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 00:56:35.231930 systemd-timesyncd[1527]: Initial clock synchronization to Wed 2026-01-28 00:56:35.231573 UTC. Jan 28 00:56:35.234538 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:56:35.238295 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:56:35.243496 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:56:35.249167 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:56:35.254031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:56:35.254091 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:56:35.257046 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:56:35.260660 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:56:35.276962 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:56:35.282628 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:56:35.289904 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:56:35.332497 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:56:35.340619 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:56:35.346329 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:56:35.359035 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:56:35.363228 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:56:35.368010 systemd[1]: System is tainted: cgroupsv1 Jan 28 00:56:35.368113 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
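[Editor's note] Earlier in this stretch systemd-timesyncd synchronized against 10.0.0.1:123, which in this QEMU setup is most plausibly an NTP server advertised by the host's DHCP server rather than a configured one. Pinning the same server statically would be a one-line timesyncd.conf change (sketch; this boot likely did not need it):

    # /etc/systemd/timesyncd.conf (sketch)
    [Time]
    NTP=10.0.0.1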
Jan 28 00:56:35.368156 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:56:35.390840 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:56:35.414157 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 00:56:35.433190 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:56:35.449556 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:56:35.485066 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:56:35.497120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:56:35.514102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:56:35.514601 jq[1536]: false Jan 28 00:56:35.541969 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:56:35.554163 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:56:35.568945 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:56:35.573880 extend-filesystems[1537]: Found loop3 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found loop4 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found loop5 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found sr0 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda1 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda2 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda3 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found usr Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda4 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda6 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda7 Jan 28 00:56:35.585450 extend-filesystems[1537]: Found vda9 Jan 28 00:56:35.585450 extend-filesystems[1537]: Checking size of /dev/vda9 Jan 28 00:56:35.686844 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 00:56:35.641292 dbus-daemon[1534]: [system] SELinux support is enabled Jan 28 00:56:35.605213 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:56:35.693479 extend-filesystems[1537]: Resized partition /dev/vda9 Jan 28 00:56:35.699521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:56:35.699904 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024) Jan 28 00:56:35.715513 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:56:35.753001 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:56:35.770787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1565) Jan 28 00:56:35.803764 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 00:56:35.799294 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:56:35.854158 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 00:56:35.854158 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 00:56:35.854158 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 28 00:56:35.872166 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Jan 28 00:56:35.879918 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:56:35.892268 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:56:35.949330 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:56:35.949918 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:56:35.950526 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:56:35.953065 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:56:35.957193 jq[1578]: true Jan 28 00:56:35.961940 update_engine[1576]: I20260128 00:56:35.957609 1576 main.cc:92] Flatcar Update Engine starting Jan 28 00:56:35.966150 update_engine[1576]: I20260128 00:56:35.964951 1576 update_check_scheduler.cc:74] Next update check in 11m40s Jan 28 00:56:35.969848 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:56:35.970324 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:56:35.975970 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:56:35.983236 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:56:35.983859 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:56:36.005961 jq[1589]: true Jan 28 00:56:36.009015 systemd-logind[1566]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 00:56:36.009066 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 00:56:36.011887 systemd-logind[1566]: New seat seat0. Jan 28 00:56:36.013120 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 00:56:36.050080 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:56:36.062638 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 00:56:36.063537 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 00:56:36.102494 dbus-daemon[1534]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 00:56:36.104964 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:56:36.117141 tar[1588]: linux-amd64/LICENSE Jan 28 00:56:36.121203 tar[1588]: linux-amd64/helm Jan 28 00:56:36.160627 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:56:36.171190 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:56:36.171661 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:56:36.173648 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:56:36.184852 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:56:36.185026 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
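[Editor's note] update_engine's "Next update check in 11m40s" above is governed by /etc/flatcar/update.conf, the file Ignition wrote back in op(9). Its contents never appear in the log; the file is a simple environment-style key list, for example (values hypothetical):

    # /etc/flatcar/update.conf (hypothetical values; real contents not shown in the log)
    GROUP=stable
    SERVER=https://public.update.flatcar-linux.net/v1/update/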
Jan 28 00:56:36.204747 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:56:36.215011 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:56:36.219837 bash[1629]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:56:36.251562 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:56:36.270418 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:56:36.315576 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:56:36.319265 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 00:56:36.418503 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:56:36.419144 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:56:36.453201 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:56:36.458789 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:56:36.573255 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:56:37.039761 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:56:37.063131 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 00:56:37.068410 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:56:38.079973 containerd[1590]: time="2026-01-28T00:56:38.079478665Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 00:56:38.157328 containerd[1590]: time="2026-01-28T00:56:38.156552541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.162814144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.162855471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.162915653Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.163324146Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.163430796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.163543175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:56:38.163664 containerd[1590]: time="2026-01-28T00:56:38.163570396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.164149 containerd[1590]: time="2026-01-28T00:56:38.164084907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:56:38.164149 containerd[1590]: time="2026-01-28T00:56:38.164140721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.164198 containerd[1590]: time="2026-01-28T00:56:38.164164816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:56:38.164198 containerd[1590]: time="2026-01-28T00:56:38.164181747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.164510 containerd[1590]: time="2026-01-28T00:56:38.164458885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.165021 containerd[1590]: time="2026-01-28T00:56:38.164972635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:56:38.165284 containerd[1590]: time="2026-01-28T00:56:38.165242178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:56:38.165315 containerd[1590]: time="2026-01-28T00:56:38.165286841Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 00:56:38.165585 containerd[1590]: time="2026-01-28T00:56:38.165528673Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 00:56:38.165740 containerd[1590]: time="2026-01-28T00:56:38.165636744Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:56:38.182614 containerd[1590]: time="2026-01-28T00:56:38.181824943Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 00:56:38.182614 containerd[1590]: time="2026-01-28T00:56:38.182605280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 00:56:38.182614 containerd[1590]: time="2026-01-28T00:56:38.182631610Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 00:56:38.182614 containerd[1590]: time="2026-01-28T00:56:38.182722660Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 00:56:38.182614 containerd[1590]: time="2026-01-28T00:56:38.182796418Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.184002991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.185591407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186011842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186078406Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186136725Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186161421Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186208540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186255727Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186311071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186332390Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186349632Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186425214Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186446613Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 00:56:38.186654 containerd[1590]: time="2026-01-28T00:56:38.186523507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186550367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186569574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186589501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186634054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186775187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186802278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186889721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186958640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.186985510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.187004596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.187045041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187108 containerd[1590]: time="2026-01-28T00:56:38.187063215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187162040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187222753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187245596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187262267Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187501453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187537060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187555194Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187576774Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187593114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187634362Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 00:56:38.187799 containerd[1590]: time="2026-01-28T00:56:38.187670750Z" level=info msg="NRI interface is disabled by configuration." Jan 28 00:56:38.188236 containerd[1590]: time="2026-01-28T00:56:38.187838623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 00:56:38.189018 containerd[1590]: time="2026-01-28T00:56:38.188864399Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 00:56:38.189018 containerd[1590]: time="2026-01-28T00:56:38.189055826Z" level=info msg="Connect containerd service" Jan 28 00:56:38.190279 containerd[1590]: time="2026-01-28T00:56:38.189164219Z" level=info msg="using legacy CRI server" Jan 28 00:56:38.190279 containerd[1590]: time="2026-01-28T00:56:38.189222377Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:56:38.190279 containerd[1590]: time="2026-01-28T00:56:38.189824161Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 00:56:38.191850 containerd[1590]: time="2026-01-28T00:56:38.191658746Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 
00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192246554Z" level=info msg="Start subscribing containerd event" Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192547527Z" level=info msg="Start recovering state" Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192639949Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192820626Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192859218Z" level=info msg="Start event monitor" Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192892480Z" level=info msg="Start snapshots syncer" Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192941682Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:56:38.193408 containerd[1590]: time="2026-01-28T00:56:38.192979643Z" level=info msg="Start streaming server" Jan 28 00:56:38.198225 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:56:38.202313 containerd[1590]: time="2026-01-28T00:56:38.198561532Z" level=info msg="containerd successfully booted in 0.206541s" Jan 28 00:56:38.719013 tar[1588]: linux-amd64/README.md Jan 28 00:56:38.792676 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:56:40.476354 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:56:40.563473 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:59570.service - OpenSSH per-connection server daemon (10.0.0.1:59570). Jan 28 00:56:41.149165 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 59570 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:41.157193 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:41.174636 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:56:41.200257 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:56:41.208943 systemd-logind[1566]: New session 1 of user core. Jan 28 00:56:41.486162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:56:41.503485 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:56:41.549562 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:56:42.432347 systemd[1674]: Queued start job for default target default.target. Jan 28 00:56:42.433590 systemd[1674]: Created slice app.slice - User Application Slice. Jan 28 00:56:42.433618 systemd[1674]: Reached target paths.target - Paths. Jan 28 00:56:42.433634 systemd[1674]: Reached target timers.target - Timers. Jan 28 00:56:42.455859 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:56:42.473631 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:56:42.473791 systemd[1674]: Reached target sockets.target - Sockets. Jan 28 00:56:42.473808 systemd[1674]: Reached target basic.target - Basic System. Jan 28 00:56:42.473860 systemd[1674]: Reached target default.target - Main User Target. Jan 28 00:56:42.473903 systemd[1674]: Startup finished in 827ms. Jan 28 00:56:42.475513 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:56:42.486201 systemd[1]: Started session-1.scope - Session 1 of User core. 
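[Note: containerd comes up with the CRI plugin but logs "no network config found in /etc/cni/net.d", so the CNI conf syncer starts with nothing to serve; pod networking stays uninitialized until a network add-on installs a config. A hedged sketch of the kind of conflist that would satisfy the syncer (file name, network name and subnet are illustrative, not taken from this host):

    # Drop a minimal bridge + loopback CNI config where the syncer watches
    cat <<'EOF' > /etc/cni/net.d/10-example-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local",
                    "ranges": [[ { "subnet": "10.85.0.0/16" } ]] } },
        { "type": "loopback" }
      ]
    }
    EOF
]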
Jan 28 00:56:42.721094 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:48760.service - OpenSSH per-connection server daemon (10.0.0.1:48760). Jan 28 00:56:42.823883 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 48760 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:42.832223 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:42.843351 systemd-logind[1566]: New session 2 of user core. Jan 28 00:56:42.854509 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:56:42.991151 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:42.997982 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:48772.service - OpenSSH per-connection server daemon (10.0.0.1:48772). Jan 28 00:56:42.998587 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:48760.service: Deactivated successfully. Jan 28 00:56:43.007928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:56:43.042466 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 00:56:43.054285 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Jan 28 00:56:43.057313 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:56:43.057850 systemd[1]: Startup finished in 16.669s (kernel) + 16.976s (userspace) = 33.646s. Jan 28 00:56:43.061001 systemd-logind[1566]: Removed session 2. Jan 28 00:56:43.061330 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:56:43.139211 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 48772 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:43.149256 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:43.158609 systemd-logind[1566]: New session 3 of user core. Jan 28 00:56:43.176222 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:56:43.254887 sshd[1701]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:43.260284 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:48772.service: Deactivated successfully. Jan 28 00:56:43.265880 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Jan 28 00:56:43.266123 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 00:56:43.268288 systemd-logind[1566]: Removed session 3. Jan 28 00:56:45.015127 kubelet[1705]: E0128 00:56:45.014676 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:56:45.019929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:56:45.020325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:56:53.288343 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:41804.service - OpenSSH per-connection server daemon (10.0.0.1:41804). Jan 28 00:56:53.383483 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 41804 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:53.459661 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:53.471091 systemd-logind[1566]: New session 4 of user core. 
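[Note: the kubelet exits with status 1 above because /var/lib/kubelet/config.yaml has not been written yet; on a kubeadm-managed node that file only appears after "kubeadm init" or "kubeadm join", so this crash loop is expected until then. A sketch of the check, plus the smallest KubeletConfiguration the loader accepts (contents illustrative, not what kubeadm generates):

    # Confirm the file the kubelet wants is still absent
    test -f /var/lib/kubelet/config.yaml || echo 'config.yaml not written yet'
    # Illustrative minimal config; kubeadm normally writes this file itself
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
]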
Jan 28 00:56:53.484558 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:56:53.567786 sshd[1723]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:53.579022 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:41812.service - OpenSSH per-connection server daemon (10.0.0.1:41812). Jan 28 00:56:53.579835 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:41804.service: Deactivated successfully. Jan 28 00:56:53.585075 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Jan 28 00:56:53.586482 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:56:53.589168 systemd-logind[1566]: Removed session 4. Jan 28 00:56:53.642620 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 41812 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:53.645239 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:53.658012 systemd-logind[1566]: New session 5 of user core. Jan 28 00:56:53.666332 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:56:53.743200 sshd[1728]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:53.751262 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:41818.service - OpenSSH per-connection server daemon (10.0.0.1:41818). Jan 28 00:56:53.752563 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:41812.service: Deactivated successfully. Jan 28 00:56:53.759264 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:56:53.760013 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:56:53.765284 systemd-logind[1566]: Removed session 5. Jan 28 00:56:53.800527 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 41818 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:53.803338 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:53.813031 systemd-logind[1566]: New session 6 of user core. Jan 28 00:56:53.823042 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:56:53.922330 sshd[1736]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:53.946458 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:41826.service - OpenSSH per-connection server daemon (10.0.0.1:41826). Jan 28 00:56:53.950938 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:41818.service: Deactivated successfully. Jan 28 00:56:53.963141 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:56:53.966221 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:56:53.968896 systemd-logind[1566]: Removed session 6. Jan 28 00:56:54.003751 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 41826 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:54.006226 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:54.015124 systemd-logind[1566]: New session 7 of user core. Jan 28 00:56:54.035439 systemd[1]: Started session-7.scope - Session 7 of User core. 
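[Note: each SSH connection above is a separate socket-activated per-connection unit (sshd@N-addr:22-peer:port.service) paired with a logind session scope, which is why sessions 4 through 7 open and close in quick succession. Two ways to watch the churn, as a sketch:

    # Per-connection sshd units currently alive
    systemctl list-units 'sshd@*' --no-legend
    # logind's view of the user sessions being opened and closed
    loginctl list-sessions
]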
Jan 28 00:56:54.152192 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:56:54.152867 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:56:54.176824 sudo[1751]: pam_unix(sudo:session): session closed for user root Jan 28 00:56:54.180011 sshd[1744]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:54.187990 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:41830.service - OpenSSH per-connection server daemon (10.0.0.1:41830). Jan 28 00:56:54.188755 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:41826.service: Deactivated successfully. Jan 28 00:56:54.191437 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:56:54.194045 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:56:54.196388 systemd-logind[1566]: Removed session 7. Jan 28 00:56:54.241803 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 41830 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:54.246621 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:54.254296 systemd-logind[1566]: New session 8 of user core. Jan 28 00:56:54.272349 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:56:54.338059 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:56:54.338619 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:56:54.345029 sudo[1761]: pam_unix(sudo:session): session closed for user root Jan 28 00:56:54.355599 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 00:56:54.356135 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:56:54.382141 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 00:56:54.385287 auditctl[1764]: No rules Jan 28 00:56:54.387144 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:56:54.387777 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 00:56:54.391092 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:56:54.465366 augenrules[1783]: No rules Jan 28 00:56:54.467588 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:56:54.469185 sudo[1760]: pam_unix(sudo:session): session closed for user root Jan 28 00:56:54.471760 sshd[1753]: pam_unix(sshd:session): session closed for user core Jan 28 00:56:54.486097 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:41844.service - OpenSSH per-connection server daemon (10.0.0.1:41844). Jan 28 00:56:54.486996 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:41830.service: Deactivated successfully. Jan 28 00:56:54.489999 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:56:54.490937 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:56:54.493108 systemd-logind[1566]: Removed session 8. Jan 28 00:56:54.523451 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 41844 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:56:54.525809 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:56:54.532571 systemd-logind[1566]: New session 9 of user core. Jan 28 00:56:54.547868 systemd[1]: Started session-9.scope - Session 9 of User core. 
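[Note: the sudo sequence above removes two files from /etc/audit/rules.d and restarts audit-rules.service; both auditctl and augenrules then report "No rules", i.e. the kernel audit rule set is now empty. The same reload done directly, as a sketch using stock auditd tooling:

    # List the audit rules currently loaded in the kernel
    auditctl -l
    # Recompile /etc/audit/rules.d/*.rules and load the result
    augenrules --load
]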
Jan 28 00:56:54.611100 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:56:54.611816 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:56:55.272813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:56:55.292080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:56:56.696123 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:56:56.706352 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:56:56.711285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:56:56.772289 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:56:57.306561 kubelet[1827]: E0128 00:56:57.306117 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:56:57.315227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:56:57.315811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:56:59.003903 dockerd[1821]: time="2026-01-28T00:56:59.002173662Z" level=info msg="Starting up" Jan 28 00:56:59.510145 dockerd[1821]: time="2026-01-28T00:56:59.509886302Z" level=info msg="Loading containers: start." Jan 28 00:56:59.883836 kernel: Initializing XFRM netlink socket Jan 28 00:57:00.262208 systemd-networkd[1249]: docker0: Link UP Jan 28 00:57:00.331452 dockerd[1821]: time="2026-01-28T00:57:00.320399883Z" level=info msg="Loading containers: done." Jan 28 00:57:00.517106 dockerd[1821]: time="2026-01-28T00:57:00.516881446Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:57:00.517640 dockerd[1821]: time="2026-01-28T00:57:00.517188810Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 00:57:00.517640 dockerd[1821]: time="2026-01-28T00:57:00.517527162Z" level=info msg="Daemon has completed initialization" Jan 28 00:57:00.818561 dockerd[1821]: time="2026-01-28T00:57:00.817890576Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:57:00.819046 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:57:03.147249 containerd[1590]: time="2026-01-28T00:57:03.146676860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:57:04.016324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097546440.mount: Deactivated successfully. Jan 28 00:57:07.570018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:57:07.586201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:08.496075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
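[Note: dockerd finishes initialization above, brings up the docker0 bridge, and listens on /run/docker.sock; it also warns that native overlay diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. Two quick health checks, as a sketch:

    # Ask the daemon for its version and storage driver over the unix socket
    docker info --format '{{.ServerVersion}} {{.Driver}}'
    # Confirm systemd agrees the unit is up
    systemctl is-active docker.service
]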
Jan 28 00:57:08.669560 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:09.498477 kubelet[2057]: E0128 00:57:09.498059 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:09.502895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:09.504295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:10.856644 containerd[1590]: time="2026-01-28T00:57:10.855362146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:10.860526 containerd[1590]: time="2026-01-28T00:57:10.860246072Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 00:57:10.873426 containerd[1590]: time="2026-01-28T00:57:10.872411044Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:10.881446 containerd[1590]: time="2026-01-28T00:57:10.881324082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:10.884907 containerd[1590]: time="2026-01-28T00:57:10.883861283Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 7.736716882s" Jan 28 00:57:10.884907 containerd[1590]: time="2026-01-28T00:57:10.883987118Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 00:57:10.892024 containerd[1590]: time="2026-01-28T00:57:10.891663278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 00:57:14.162936 containerd[1590]: time="2026-01-28T00:57:14.162410510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:14.170337 containerd[1590]: time="2026-01-28T00:57:14.170103402Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 00:57:14.173781 containerd[1590]: time="2026-01-28T00:57:14.173630043Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:14.180464 containerd[1590]: time="2026-01-28T00:57:14.180324579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 
00:57:14.183806 containerd[1590]: time="2026-01-28T00:57:14.182917286Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.291109478s" Jan 28 00:57:14.184065 containerd[1590]: time="2026-01-28T00:57:14.183840640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 00:57:14.186634 containerd[1590]: time="2026-01-28T00:57:14.186512584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 00:57:16.763504 containerd[1590]: time="2026-01-28T00:57:16.763278490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:16.765768 containerd[1590]: time="2026-01-28T00:57:16.765355397Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 00:57:16.765768 containerd[1590]: time="2026-01-28T00:57:16.765608076Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:16.769761 containerd[1590]: time="2026-01-28T00:57:16.769643756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:16.772298 containerd[1590]: time="2026-01-28T00:57:16.772158999Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.585606732s" Jan 28 00:57:16.772298 containerd[1590]: time="2026-01-28T00:57:16.772253069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 00:57:16.774983 containerd[1590]: time="2026-01-28T00:57:16.774905183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 00:57:20.142235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:57:20.171597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:21.513413 update_engine[1576]: I20260128 00:57:21.506407 1576 update_attempter.cc:509] Updating boot flags... Jan 28 00:57:22.168740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2090) Jan 28 00:57:22.201103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
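[Note: update_engine logs "Updating boot flags..." above, i.e. it marks the currently booted Flatcar partition set as successfully booted. Its state can be polled from a shell; a sketch, assuming the client binary Flatcar ships:

    # Report current operation, versions and next scheduled update check
    update_engine_client -status
]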
Jan 28 00:57:22.234435 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:22.271770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2089) Jan 28 00:57:22.508390 kubelet[2103]: E0128 00:57:22.507385 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:22.513649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:22.514192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:22.767003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067113274.mount: Deactivated successfully. Jan 28 00:57:24.610450 containerd[1590]: time="2026-01-28T00:57:24.610157982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:24.614990 containerd[1590]: time="2026-01-28T00:57:24.614786622Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 00:57:24.618940 containerd[1590]: time="2026-01-28T00:57:24.618839572Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:24.644634 containerd[1590]: time="2026-01-28T00:57:24.644403691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:24.646808 containerd[1590]: time="2026-01-28T00:57:24.646504174Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 7.871562346s" Jan 28 00:57:24.646906 containerd[1590]: time="2026-01-28T00:57:24.646832988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 00:57:24.651301 containerd[1590]: time="2026-01-28T00:57:24.651212750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:57:26.118950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009858625.mount: Deactivated successfully. 
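[Note: the var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount entries above are transient mount units containerd creates while unpacking image layers; each is torn down ("Deactivated successfully") once the layer is committed. A sketch for watching them during a pull:

    # Transient containerd unpack mounts currently active
    systemctl list-units --type=mount 'var-lib-containerd-tmpmounts-*'
]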
Jan 28 00:57:28.500057 containerd[1590]: time="2026-01-28T00:57:28.499863178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:28.502025 containerd[1590]: time="2026-01-28T00:57:28.500535661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 00:57:28.502360 containerd[1590]: time="2026-01-28T00:57:28.502265219Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:28.506905 containerd[1590]: time="2026-01-28T00:57:28.506785379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:28.509739 containerd[1590]: time="2026-01-28T00:57:28.509588237Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.858297424s" Jan 28 00:57:28.509812 containerd[1590]: time="2026-01-28T00:57:28.509746700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 00:57:28.512412 containerd[1590]: time="2026-01-28T00:57:28.512094691Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:57:29.068775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714785099.mount: Deactivated successfully. 
Jan 28 00:57:29.075764 containerd[1590]: time="2026-01-28T00:57:29.075651952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:29.076831 containerd[1590]: time="2026-01-28T00:57:29.076754821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 00:57:29.078465 containerd[1590]: time="2026-01-28T00:57:29.078405466Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:29.083051 containerd[1590]: time="2026-01-28T00:57:29.082962616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:29.084159 containerd[1590]: time="2026-01-28T00:57:29.084087597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 571.948855ms" Jan 28 00:57:29.084159 containerd[1590]: time="2026-01-28T00:57:29.084131979Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 00:57:29.086181 containerd[1590]: time="2026-01-28T00:57:29.086034436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:57:29.697579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826361428.mount: Deactivated successfully. Jan 28 00:57:32.638352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 00:57:32.652156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
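[Note: the pause:3.10 pull above completes in roughly 572ms through containerd's CRI image service, and the much larger etcd:3.5.16-0 pull starts next. The same pull can be reproduced with containerd's own client in the k8s.io namespace, a sketch:

    # Pull the sandbox image into the namespace the CRI plugin uses
    ctr -n k8s.io images pull registry.k8s.io/pause:3.10
    # Verify it landed in the k8s.io namespace
    ctr -n k8s.io images ls | grep 'pause:3.10'
]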
Jan 28 00:57:32.873935 containerd[1590]: time="2026-01-28T00:57:32.873549976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:32.886179 containerd[1590]: time="2026-01-28T00:57:32.885995629Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 28 00:57:32.898824 containerd[1590]: time="2026-01-28T00:57:32.896551771Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:32.916762 containerd[1590]: time="2026-01-28T00:57:32.916580962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:57:32.919143 containerd[1590]: time="2026-01-28T00:57:32.919063398Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.832988797s" Jan 28 00:57:32.919143 containerd[1590]: time="2026-01-28T00:57:32.919123109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 00:57:33.034046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:33.043056 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:57:33.249158 kubelet[2251]: E0128 00:57:33.247974 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:57:33.253503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:57:33.253870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:57:37.274953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:37.288084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:37.325778 systemd[1]: Reloading requested from client PID 2282 ('systemctl') (unit session-9.scope)... Jan 28 00:57:37.325861 systemd[1]: Reloading... Jan 28 00:57:37.459798 zram_generator::config[2321]: No configuration found. Jan 28 00:57:37.739818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:57:37.851770 systemd[1]: Reloading finished in 525 ms. Jan 28 00:57:37.916987 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:57:37.917168 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:57:37.917898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:37.922969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
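[Note: during the reload above, systemd warns that docker.socket still points ListenStream= below the legacy /var/run/ directory and rewrites it to /run/docker.sock on the fly. A drop-in that fixes the unit permanently, as a sketch (drop-in file name illustrative):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # An empty assignment clears the inherited list before re-adding the path
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
]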
Jan 28 00:57:38.256436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:38.269417 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:57:38.338867 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:57:38.338867 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:57:38.338867 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:57:38.339579 kubelet[2382]: I0128 00:57:38.339108 2382 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:57:38.973105 kubelet[2382]: I0128 00:57:38.973037 2382 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:57:38.973105 kubelet[2382]: I0128 00:57:38.973086 2382 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:57:38.973744 kubelet[2382]: I0128 00:57:38.973668 2382 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:57:39.015243 kubelet[2382]: E0128 00:57:39.015153 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:39.016403 kubelet[2382]: I0128 00:57:39.016343 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:57:39.035313 kubelet[2382]: E0128 00:57:39.035242 2382 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:57:39.035313 kubelet[2382]: I0128 00:57:39.035307 2382 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 00:57:39.046721 kubelet[2382]: I0128 00:57:39.046647 2382 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:57:39.048943 kubelet[2382]: I0128 00:57:39.048857 2382 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:57:39.049408 kubelet[2382]: I0128 00:57:39.048928 2382 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 00:57:39.049950 kubelet[2382]: I0128 00:57:39.049440 2382 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:57:39.049950 kubelet[2382]: I0128 00:57:39.049458 2382 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:57:39.049950 kubelet[2382]: I0128 00:57:39.049830 2382 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:57:39.054734 kubelet[2382]: I0128 00:57:39.054615 2382 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:57:39.054734 kubelet[2382]: I0128 00:57:39.054669 2382 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:57:39.054829 kubelet[2382]: I0128 00:57:39.054746 2382 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:57:39.054829 kubelet[2382]: I0128 00:57:39.054763 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:57:39.060643 kubelet[2382]: W0128 00:57:39.060063 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:39.060643 kubelet[2382]: E0128 00:57:39.060124 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:39.060643 kubelet[2382]: W0128 00:57:39.060500 2382 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:39.060643 kubelet[2382]: E0128 00:57:39.060534 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:39.061832 kubelet[2382]: I0128 00:57:39.061667 2382 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:57:39.062589 kubelet[2382]: I0128 00:57:39.062491 2382 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:57:39.100225 kubelet[2382]: W0128 00:57:39.091406 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:57:39.131672 kubelet[2382]: I0128 00:57:39.131226 2382 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:57:39.131672 kubelet[2382]: I0128 00:57:39.131768 2382 server.go:1287] "Started kubelet" Jan 28 00:57:39.135338 kubelet[2382]: I0128 00:57:39.134851 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:57:39.136467 kubelet[2382]: I0128 00:57:39.136029 2382 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:57:39.136467 kubelet[2382]: I0128 00:57:39.136318 2382 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:57:39.137449 kubelet[2382]: I0128 00:57:39.137404 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:57:39.140779 kubelet[2382]: I0128 00:57:39.137580 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:57:39.140779 kubelet[2382]: I0128 00:57:39.138070 2382 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:57:39.140779 kubelet[2382]: E0128 00:57:39.139577 2382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:57:39.140779 kubelet[2382]: I0128 00:57:39.139733 2382 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:57:39.140779 kubelet[2382]: I0128 00:57:39.140107 2382 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:57:39.140779 kubelet[2382]: I0128 00:57:39.140254 2382 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:57:39.140779 kubelet[2382]: W0128 00:57:39.140620 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:39.140993 kubelet[2382]: E0128 00:57:39.140674 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection 
refused" logger="UnhandledError" Jan 28 00:57:39.142074 kubelet[2382]: I0128 00:57:39.142050 2382 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:57:39.142906 kubelet[2382]: E0128 00:57:39.142676 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Jan 28 00:57:39.143591 kubelet[2382]: I0128 00:57:39.143570 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:57:39.145465 kubelet[2382]: E0128 00:57:39.145304 2382 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:57:39.145827 kubelet[2382]: I0128 00:57:39.145787 2382 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:57:39.157774 kubelet[2382]: E0128 00:57:39.144771 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ebf22a59e920f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 00:57:39.131609615 +0000 UTC m=+0.853611895,LastTimestamp:2026-01-28 00:57:39.131609615 +0000 UTC m=+0.853611895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 00:57:39.243240 kubelet[2382]: E0128 00:57:39.240575 2382 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 00:57:39.252942 kubelet[2382]: I0128 00:57:39.252900 2382 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:57:39.252942 kubelet[2382]: I0128 00:57:39.252929 2382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:57:39.252942 kubelet[2382]: I0128 00:57:39.252949 2382 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:57:39.255617 kubelet[2382]: I0128 00:57:39.255505 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:57:39.258341 kubelet[2382]: I0128 00:57:39.258280 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:57:39.258501 kubelet[2382]: I0128 00:57:39.258476 2382 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:57:39.258585 kubelet[2382]: I0128 00:57:39.258558 2382 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:57:39.258585 kubelet[2382]: I0128 00:57:39.258584 2382 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:57:39.258825 kubelet[2382]: E0128 00:57:39.258746 2382 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:57:39.259216 kubelet[2382]: W0128 00:57:39.259149 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:39.259255 kubelet[2382]: E0128 00:57:39.259226 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:39.318259 kubelet[2382]: I0128 00:57:39.318151 2382 policy_none.go:49] "None policy: Start" Jan 28 00:57:39.318259 kubelet[2382]: I0128 00:57:39.318258 2382 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:57:39.318496 kubelet[2382]: I0128 00:57:39.318323 2382 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:57:39.331209 kubelet[2382]: I0128 00:57:39.331153 2382 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:57:39.331532 kubelet[2382]: I0128 00:57:39.331500 2382 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:57:39.331629 kubelet[2382]: I0128 00:57:39.331535 2382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:57:39.332877 kubelet[2382]: I0128 00:57:39.332835 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:57:39.333955 kubelet[2382]: E0128 00:57:39.333916 2382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:57:39.334065 kubelet[2382]: E0128 00:57:39.333997 2382 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 00:57:39.344423 kubelet[2382]: E0128 00:57:39.344269 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Jan 28 00:57:39.367909 kubelet[2382]: E0128 00:57:39.367229 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:39.369090 kubelet[2382]: E0128 00:57:39.369031 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:39.370539 kubelet[2382]: E0128 00:57:39.370488 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:39.434568 kubelet[2382]: I0128 00:57:39.434473 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:57:39.435112 kubelet[2382]: E0128 00:57:39.435058 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Jan 28 00:57:39.443227 kubelet[2382]: I0128 00:57:39.442851 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:39.443227 kubelet[2382]: I0128 00:57:39.442930 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:39.443227 kubelet[2382]: I0128 00:57:39.442969 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5f658534deed5fc1695a3408a404730-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5f658534deed5fc1695a3408a404730\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:39.443227 kubelet[2382]: I0128 00:57:39.443003 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5f658534deed5fc1695a3408a404730-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5f658534deed5fc1695a3408a404730\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:39.443227 kubelet[2382]: I0128 00:57:39.443034 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:39.443539 kubelet[2382]: I0128 00:57:39.443060 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:39.443539 kubelet[2382]: I0128 00:57:39.443084 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5f658534deed5fc1695a3408a404730-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5f658534deed5fc1695a3408a404730\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:39.443539 kubelet[2382]: I0128 00:57:39.443113 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:39.443539 kubelet[2382]: I0128 00:57:39.443143 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:39.637784 kubelet[2382]: I0128 00:57:39.637636 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:57:39.638224 kubelet[2382]: E0128 00:57:39.638184 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Jan 28 00:57:39.668922 kubelet[2382]: E0128 00:57:39.668816 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:39.669504 kubelet[2382]: E0128 00:57:39.669465 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:39.670012 containerd[1590]: time="2026-01-28T00:57:39.669967198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5f658534deed5fc1695a3408a404730,Namespace:kube-system,Attempt:0,}" Jan 28 00:57:39.670664 containerd[1590]: time="2026-01-28T00:57:39.670391702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 00:57:39.672016 kubelet[2382]: E0128 00:57:39.671987 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:39.672543 containerd[1590]: time="2026-01-28T00:57:39.672501495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 00:57:39.747016 kubelet[2382]: E0128 
00:57:39.746893 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Jan 28 00:57:40.045489 kubelet[2382]: I0128 00:57:40.045095 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:57:40.045940 kubelet[2382]: E0128 00:57:40.045801 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Jan 28 00:57:40.131162 kubelet[2382]: W0128 00:57:40.130795 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:40.131162 kubelet[2382]: E0128 00:57:40.130963 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:40.142887 kubelet[2382]: E0128 00:57:40.142675 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ebf22a59e920f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 00:57:39.131609615 +0000 UTC m=+0.853611895,LastTimestamp:2026-01-28 00:57:39.131609615 +0000 UTC m=+0.853611895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 00:57:40.275103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52478876.mount: Deactivated successfully. 
Jan 28 00:57:40.282631 containerd[1590]: time="2026-01-28T00:57:40.282550519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:57:40.285536 containerd[1590]: time="2026-01-28T00:57:40.285471052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 00:57:40.286992 containerd[1590]: time="2026-01-28T00:57:40.286931225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:57:40.288400 containerd[1590]: time="2026-01-28T00:57:40.288285771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:57:40.289755 containerd[1590]: time="2026-01-28T00:57:40.289536044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:57:40.291128 containerd[1590]: time="2026-01-28T00:57:40.291054497Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:57:40.292522 containerd[1590]: time="2026-01-28T00:57:40.292429009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:57:40.292580 kubelet[2382]: W0128 00:57:40.292456 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:40.292580 kubelet[2382]: E0128 00:57:40.292556 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:40.294646 containerd[1590]: time="2026-01-28T00:57:40.294569993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:57:40.297784 containerd[1590]: time="2026-01-28T00:57:40.297619255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.05265ms" Jan 28 00:57:40.300805 containerd[1590]: time="2026-01-28T00:57:40.300757526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.235753ms" Jan 28 00:57:40.302365 
containerd[1590]: time="2026-01-28T00:57:40.302285110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.210311ms" Jan 28 00:57:40.442651 kubelet[2382]: W0128 00:57:40.442217 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:40.442651 kubelet[2382]: E0128 00:57:40.442495 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:40.550559 kubelet[2382]: W0128 00:57:40.549254 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Jan 28 00:57:40.550559 kubelet[2382]: E0128 00:57:40.549485 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:40.550559 kubelet[2382]: E0128 00:57:40.549179 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Jan 28 00:57:40.790875 containerd[1590]: time="2026-01-28T00:57:40.775991550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:57:40.790875 containerd[1590]: time="2026-01-28T00:57:40.776209044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:57:40.790875 containerd[1590]: time="2026-01-28T00:57:40.776246313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:40.790875 containerd[1590]: time="2026-01-28T00:57:40.776837841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:40.893638 containerd[1590]: time="2026-01-28T00:57:40.893309771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:57:40.893638 containerd[1590]: time="2026-01-28T00:57:40.893507468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:57:40.893638 containerd[1590]: time="2026-01-28T00:57:40.893530731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:40.893993 containerd[1590]: time="2026-01-28T00:57:40.893835727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:40.896119 kubelet[2382]: I0128 00:57:40.895921 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:57:40.896621 kubelet[2382]: E0128 00:57:40.896539 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Jan 28 00:57:40.923675 containerd[1590]: time="2026-01-28T00:57:40.921567439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:57:40.923675 containerd[1590]: time="2026-01-28T00:57:40.921649500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:57:40.923675 containerd[1590]: time="2026-01-28T00:57:40.921668806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:40.924454 containerd[1590]: time="2026-01-28T00:57:40.921959968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:41.014907 containerd[1590]: time="2026-01-28T00:57:41.014806257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d28fc2ed1adc248894e339e5e593ecb6459b3ae9a13ce8fa6eba0bcc5a0c4e2b\"" Jan 28 00:57:41.017147 kubelet[2382]: E0128 00:57:41.016793 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:41.040580 containerd[1590]: time="2026-01-28T00:57:41.040510021Z" level=info msg="CreateContainer within sandbox \"d28fc2ed1adc248894e339e5e593ecb6459b3ae9a13ce8fa6eba0bcc5a0c4e2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:57:41.074042 containerd[1590]: time="2026-01-28T00:57:41.073955186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"267b7c6a1d1e0161c9a60c973dd8108812bc21919bcac5b541b75520b9d00c50\"" Jan 28 00:57:41.074922 kubelet[2382]: E0128 00:57:41.074851 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:41.075830 containerd[1590]: time="2026-01-28T00:57:41.075803577Z" level=info msg="CreateContainer within sandbox \"d28fc2ed1adc248894e339e5e593ecb6459b3ae9a13ce8fa6eba0bcc5a0c4e2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37ef368663a9edef0671fef7df601d7e55e7ac6a56e9932d7137d2c11c17a552\"" Jan 28 00:57:41.076759 containerd[1590]: time="2026-01-28T00:57:41.076524135Z" level=info msg="StartContainer for \"37ef368663a9edef0671fef7df601d7e55e7ac6a56e9932d7137d2c11c17a552\"" Jan 28 00:57:41.078600 containerd[1590]: time="2026-01-28T00:57:41.078536611Z" level=info msg="CreateContainer within sandbox 
\"267b7c6a1d1e0161c9a60c973dd8108812bc21919bcac5b541b75520b9d00c50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:57:41.091642 containerd[1590]: time="2026-01-28T00:57:41.091585825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d5f658534deed5fc1695a3408a404730,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29d0d25157ccab82126c4f4f813541b458e61cc5991c8d67ddd5992a8c65c96\"" Jan 28 00:57:41.093682 kubelet[2382]: E0128 00:57:41.093628 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:41.095725 containerd[1590]: time="2026-01-28T00:57:41.095652531Z" level=info msg="CreateContainer within sandbox \"d29d0d25157ccab82126c4f4f813541b458e61cc5991c8d67ddd5992a8c65c96\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:57:41.099103 containerd[1590]: time="2026-01-28T00:57:41.099049199Z" level=info msg="CreateContainer within sandbox \"267b7c6a1d1e0161c9a60c973dd8108812bc21919bcac5b541b75520b9d00c50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e92b0d60b4fb67f55534ef4858bb5e0d608161fe7e084d873048e93c29949a4\"" Jan 28 00:57:41.099743 containerd[1590]: time="2026-01-28T00:57:41.099657479Z" level=info msg="StartContainer for \"8e92b0d60b4fb67f55534ef4858bb5e0d608161fe7e084d873048e93c29949a4\"" Jan 28 00:57:41.117753 containerd[1590]: time="2026-01-28T00:57:41.115278909Z" level=info msg="CreateContainer within sandbox \"d29d0d25157ccab82126c4f4f813541b458e61cc5991c8d67ddd5992a8c65c96\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7b1e9dee3f958fb33ba6f0fa06fdfecf19adc6debe127809abfd1ea377183af\"" Jan 28 00:57:41.117753 containerd[1590]: time="2026-01-28T00:57:41.116068605Z" level=info msg="StartContainer for \"a7b1e9dee3f958fb33ba6f0fa06fdfecf19adc6debe127809abfd1ea377183af\"" Jan 28 00:57:41.215787 kubelet[2382]: E0128 00:57:41.213262 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:57:41.274416 containerd[1590]: time="2026-01-28T00:57:41.274321140Z" level=info msg="StartContainer for \"8e92b0d60b4fb67f55534ef4858bb5e0d608161fe7e084d873048e93c29949a4\" returns successfully" Jan 28 00:57:41.300940 containerd[1590]: time="2026-01-28T00:57:41.300880114Z" level=info msg="StartContainer for \"a7b1e9dee3f958fb33ba6f0fa06fdfecf19adc6debe127809abfd1ea377183af\" returns successfully" Jan 28 00:57:41.301117 containerd[1590]: time="2026-01-28T00:57:41.300958459Z" level=info msg="StartContainer for \"37ef368663a9edef0671fef7df601d7e55e7ac6a56e9932d7137d2c11c17a552\" returns successfully" Jan 28 00:57:41.308141 kubelet[2382]: E0128 00:57:41.308022 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:41.308286 kubelet[2382]: E0128 00:57:41.308238 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:41.312740 
kubelet[2382]: E0128 00:57:41.312116 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:41.312740 kubelet[2382]: E0128 00:57:41.312294 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:42.346106 kubelet[2382]: E0128 00:57:42.345222 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:42.346106 kubelet[2382]: E0128 00:57:42.345644 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:42.347853 kubelet[2382]: E0128 00:57:42.347412 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:42.347853 kubelet[2382]: E0128 00:57:42.347609 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:42.502090 kubelet[2382]: I0128 00:57:42.502025 2382 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:57:43.383588 kubelet[2382]: E0128 00:57:43.383427 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:43.383588 kubelet[2382]: E0128 00:57:43.383418 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:43.383588 kubelet[2382]: E0128 00:57:43.383673 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:43.385224 kubelet[2382]: E0128 00:57:43.383768 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:44.393151 kubelet[2382]: E0128 00:57:44.392857 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 00:57:44.393151 kubelet[2382]: E0128 00:57:44.393231 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:44.637094 kubelet[2382]: I0128 00:57:44.636903 2382 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 00:57:44.637094 kubelet[2382]: E0128 00:57:44.636978 2382 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 00:57:44.645255 kubelet[2382]: I0128 00:57:44.643612 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:44.667662 kubelet[2382]: E0128 00:57:44.662951 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:44.667662 kubelet[2382]: I0128 00:57:44.662994 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:44.670066 kubelet[2382]: E0128 00:57:44.669816 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:44.670066 kubelet[2382]: I0128 00:57:44.669857 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:44.672132 kubelet[2382]: E0128 00:57:44.672066 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:45.089994 kubelet[2382]: I0128 00:57:45.089274 2382 apiserver.go:52] "Watching apiserver" Jan 28 00:57:45.140735 kubelet[2382]: I0128 00:57:45.140572 2382 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:57:47.794871 systemd[1]: Reloading requested from client PID 2666 ('systemctl') (unit session-9.scope)... Jan 28 00:57:47.794914 systemd[1]: Reloading... Jan 28 00:57:48.061441 zram_generator::config[2708]: No configuration found. Jan 28 00:57:48.473431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:57:48.647575 systemd[1]: Reloading finished in 852 ms. Jan 28 00:57:48.705803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:48.706482 kubelet[2382]: E0128 00:57:48.705800 2382 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.188ebf22a59e920f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 00:57:39.131609615 +0000 UTC m=+0.853611895,LastTimestamp:2026-01-28 00:57:39.131609615 +0000 UTC m=+0.853611895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 00:57:48.745276 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:57:48.746202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:48.758069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:57:49.001670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:57:49.015466 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:57:49.120971 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:57:49.126177 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 28 00:57:49.126177 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:57:49.126177 kubelet[2760]: I0128 00:57:49.125185 2760 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:57:49.158932 kubelet[2760]: I0128 00:57:49.158407 2760 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:57:49.158932 kubelet[2760]: I0128 00:57:49.158449 2760 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:57:49.158932 kubelet[2760]: I0128 00:57:49.158888 2760 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:57:49.161008 kubelet[2760]: I0128 00:57:49.160969 2760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 00:57:49.180837 kubelet[2760]: I0128 00:57:49.180343 2760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:57:49.199630 kubelet[2760]: E0128 00:57:49.199479 2760 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:57:49.199630 kubelet[2760]: I0128 00:57:49.199615 2760 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 00:57:49.215187 kubelet[2760]: I0128 00:57:49.215074 2760 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:57:49.216351 kubelet[2760]: I0128 00:57:49.216238 2760 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:57:49.216542 kubelet[2760]: I0128 00:57:49.216331 2760 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 00:57:49.216542 kubelet[2760]: I0128 00:57:49.216521 2760 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:57:49.216542 kubelet[2760]: I0128 00:57:49.216533 2760 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:57:49.216784 kubelet[2760]: I0128 00:57:49.216585 2760 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:57:49.217089 kubelet[2760]: I0128 00:57:49.216998 2760 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:57:49.217089 kubelet[2760]: I0128 00:57:49.217039 2760 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:57:49.217089 kubelet[2760]: I0128 00:57:49.217058 2760 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:57:49.217089 kubelet[2760]: I0128 00:57:49.217069 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:57:49.218878 kubelet[2760]: I0128 00:57:49.218848 2760 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:57:49.221906 kubelet[2760]: I0128 00:57:49.219284 2760 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:57:49.222315 kubelet[2760]: I0128 00:57:49.222239 2760 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:57:49.222410 kubelet[2760]: I0128 00:57:49.222363 2760 server.go:1287] "Started kubelet" Jan 28 00:57:49.228772 kubelet[2760]: I0128 00:57:49.227533 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:57:49.228772 kubelet[2760]: I0128 
00:57:49.227578 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:57:49.228772 kubelet[2760]: I0128 00:57:49.227938 2760 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:57:49.228772 kubelet[2760]: I0128 00:57:49.227998 2760 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:57:49.232363 kubelet[2760]: I0128 00:57:49.230537 2760 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:57:49.236650 kubelet[2760]: I0128 00:57:49.236568 2760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:57:49.239109 kubelet[2760]: I0128 00:57:49.239092 2760 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:57:49.239534 kubelet[2760]: I0128 00:57:49.239520 2760 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:57:49.240182 kubelet[2760]: I0128 00:57:49.240169 2760 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:57:49.244656 kubelet[2760]: I0128 00:57:49.244563 2760 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:57:49.244867 kubelet[2760]: I0128 00:57:49.244777 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:57:49.248559 kubelet[2760]: E0128 00:57:49.248471 2760 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:57:49.249426 kubelet[2760]: I0128 00:57:49.249358 2760 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:57:49.262058 kubelet[2760]: I0128 00:57:49.259976 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:57:49.262430 kubelet[2760]: I0128 00:57:49.262339 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:57:49.262430 kubelet[2760]: I0128 00:57:49.262394 2760 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:57:49.262430 kubelet[2760]: I0128 00:57:49.262420 2760 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:57:49.262430 kubelet[2760]: I0128 00:57:49.262431 2760 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:57:49.262631 kubelet[2760]: E0128 00:57:49.262506 2760 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:57:49.348246 kubelet[2760]: I0128 00:57:49.348213 2760 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:57:49.348463 kubelet[2760]: I0128 00:57:49.348447 2760 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:57:49.348519 kubelet[2760]: I0128 00:57:49.348510 2760 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:57:49.348766 kubelet[2760]: I0128 00:57:49.348749 2760 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:57:49.348848 kubelet[2760]: I0128 00:57:49.348813 2760 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:57:49.348936 kubelet[2760]: I0128 00:57:49.348923 2760 policy_none.go:49] "None policy: Start" Jan 28 00:57:49.349046 kubelet[2760]: I0128 00:57:49.349030 2760 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:57:49.349129 kubelet[2760]: I0128 00:57:49.349114 2760 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:57:49.349402 kubelet[2760]: I0128 00:57:49.349379 2760 state_mem.go:75] "Updated machine memory state" Jan 28 00:57:49.351473 kubelet[2760]: I0128 00:57:49.351453 2760 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:57:49.351756 kubelet[2760]: I0128 00:57:49.351742 2760 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:57:49.351828 kubelet[2760]: I0128 00:57:49.351804 2760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:57:49.352114 kubelet[2760]: I0128 00:57:49.352051 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:57:49.355502 kubelet[2760]: E0128 00:57:49.355436 2760 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:57:49.363058 kubelet[2760]: I0128 00:57:49.363020 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:49.363637 kubelet[2760]: I0128 00:57:49.363211 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:49.363753 kubelet[2760]: I0128 00:57:49.363037 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:49.471410 kubelet[2760]: I0128 00:57:49.471130 2760 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 00:57:49.488258 kubelet[2760]: I0128 00:57:49.486908 2760 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 00:57:49.488258 kubelet[2760]: I0128 00:57:49.487120 2760 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 00:57:49.543882 kubelet[2760]: I0128 00:57:49.543357 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5f658534deed5fc1695a3408a404730-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5f658534deed5fc1695a3408a404730\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:49.543882 kubelet[2760]: I0128 00:57:49.543425 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:49.543882 kubelet[2760]: I0128 00:57:49.543459 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:49.543882 kubelet[2760]: I0128 00:57:49.543501 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:49.543882 kubelet[2760]: I0128 00:57:49.543530 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5f658534deed5fc1695a3408a404730-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d5f658534deed5fc1695a3408a404730\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:49.544401 kubelet[2760]: I0128 00:57:49.543557 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5f658534deed5fc1695a3408a404730-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d5f658534deed5fc1695a3408a404730\") " pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:49.544401 kubelet[2760]: I0128 00:57:49.543583 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:49.544401 kubelet[2760]: I0128 00:57:49.543604 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:49.544401 kubelet[2760]: I0128 00:57:49.543627 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:49.683146 kubelet[2760]: E0128 00:57:49.682390 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:49.683146 kubelet[2760]: E0128 00:57:49.682399 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:49.687010 kubelet[2760]: E0128 00:57:49.686969 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:50.218812 kubelet[2760]: I0128 00:57:50.218673 2760 apiserver.go:52] "Watching apiserver" Jan 28 00:57:50.276436 kubelet[2760]: I0128 00:57:50.276011 2760 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:57:50.298640 kubelet[2760]: I0128 00:57:50.298609 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:50.299976 kubelet[2760]: I0128 00:57:50.299959 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:50.301500 kubelet[2760]: I0128 00:57:50.301453 2760 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:50.310030 kubelet[2760]: E0128 00:57:50.309885 2760 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 28 00:57:50.311320 kubelet[2760]: E0128 00:57:50.311159 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:50.312918 kubelet[2760]: E0128 00:57:50.312846 2760 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 00:57:50.313259 kubelet[2760]: E0128 00:57:50.313019 2760 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 00:57:50.313611 kubelet[2760]: E0128 00:57:50.313591 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:50.314063 kubelet[2760]: E0128 00:57:50.314022 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:50.373851 kubelet[2760]: I0128 00:57:50.373595 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.373553943 podStartE2EDuration="1.373553943s" podCreationTimestamp="2026-01-28 00:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:57:50.357956626 +0000 UTC m=+1.336108280" watchObservedRunningTime="2026-01-28 00:57:50.373553943 +0000 UTC m=+1.351705566" Jan 28 00:57:50.373851 kubelet[2760]: I0128 00:57:50.373750 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.373744939 podStartE2EDuration="1.373744939s" podCreationTimestamp="2026-01-28 00:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:57:50.373149233 +0000 UTC m=+1.351300866" watchObservedRunningTime="2026-01-28 00:57:50.373744939 +0000 UTC m=+1.351896562" Jan 28 00:57:50.417447 kubelet[2760]: I0128 00:57:50.417337 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.417276765 podStartE2EDuration="1.417276765s" podCreationTimestamp="2026-01-28 00:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:57:50.389076721 +0000 UTC m=+1.367228383" watchObservedRunningTime="2026-01-28 00:57:50.417276765 +0000 UTC m=+1.395428408" Jan 28 00:57:51.300984 kubelet[2760]: E0128 00:57:51.300804 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:51.300984 kubelet[2760]: E0128 00:57:51.300889 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:51.302543 kubelet[2760]: E0128 00:57:51.301124 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:52.304679 kubelet[2760]: E0128 00:57:52.303903 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:52.304679 kubelet[2760]: E0128 00:57:52.304125 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:53.305547 kubelet[2760]: E0128 00:57:53.305483 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:53.353519 kubelet[2760]: I0128 00:57:53.353469 2760 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Jan 28 00:57:53.354267 containerd[1590]: time="2026-01-28T00:57:53.354112619Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:57:53.354953 kubelet[2760]: I0128 00:57:53.354407 2760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:57:54.122479 kubelet[2760]: I0128 00:57:54.122371 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f906b424-23cd-4541-b6ee-44c32642895e-kube-proxy\") pod \"kube-proxy-pd842\" (UID: \"f906b424-23cd-4541-b6ee-44c32642895e\") " pod="kube-system/kube-proxy-pd842" Jan 28 00:57:54.122479 kubelet[2760]: I0128 00:57:54.122446 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f906b424-23cd-4541-b6ee-44c32642895e-lib-modules\") pod \"kube-proxy-pd842\" (UID: \"f906b424-23cd-4541-b6ee-44c32642895e\") " pod="kube-system/kube-proxy-pd842" Jan 28 00:57:54.122678 kubelet[2760]: I0128 00:57:54.122515 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpmv\" (UniqueName: \"kubernetes.io/projected/f906b424-23cd-4541-b6ee-44c32642895e-kube-api-access-6zpmv\") pod \"kube-proxy-pd842\" (UID: \"f906b424-23cd-4541-b6ee-44c32642895e\") " pod="kube-system/kube-proxy-pd842" Jan 28 00:57:54.122678 kubelet[2760]: I0128 00:57:54.122570 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f906b424-23cd-4541-b6ee-44c32642895e-xtables-lock\") pod \"kube-proxy-pd842\" (UID: \"f906b424-23cd-4541-b6ee-44c32642895e\") " pod="kube-system/kube-proxy-pd842" Jan 28 00:57:54.370492 kubelet[2760]: E0128 00:57:54.370410 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:57:54.371312 containerd[1590]: time="2026-01-28T00:57:54.371242972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd842,Uid:f906b424-23cd-4541-b6ee-44c32642895e,Namespace:kube-system,Attempt:0,}" Jan 28 00:57:54.406241 containerd[1590]: time="2026-01-28T00:57:54.406071631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:57:54.406241 containerd[1590]: time="2026-01-28T00:57:54.406191735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:57:54.406241 containerd[1590]: time="2026-01-28T00:57:54.406212654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:57:54.407898 containerd[1590]: time="2026-01-28T00:57:54.407816201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 28 00:57:54.497388 containerd[1590]: time="2026-01-28T00:57:54.497209722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pd842,Uid:f906b424-23cd-4541-b6ee-44c32642895e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7f4040a60defaa34b94337bc301321112df405f2bfd79e77a05799e70bfd2e5\""
Jan 28 00:57:54.500104 kubelet[2760]: E0128 00:57:54.500066 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:57:54.506942 containerd[1590]: time="2026-01-28T00:57:54.506846180Z" level=info msg="CreateContainer within sandbox \"f7f4040a60defaa34b94337bc301321112df405f2bfd79e77a05799e70bfd2e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 28 00:57:54.532560 containerd[1590]: time="2026-01-28T00:57:54.532469355Z" level=info msg="CreateContainer within sandbox \"f7f4040a60defaa34b94337bc301321112df405f2bfd79e77a05799e70bfd2e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58a1990506839aa21acea67bff08d028c31e130b3378a8fc4344ed33e29e9cee\""
Jan 28 00:57:54.533526 containerd[1590]: time="2026-01-28T00:57:54.533404387Z" level=info msg="StartContainer for \"58a1990506839aa21acea67bff08d028c31e130b3378a8fc4344ed33e29e9cee\""
Jan 28 00:57:54.633558 kubelet[2760]: I0128 00:57:54.633371 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr7kk\" (UniqueName: \"kubernetes.io/projected/47690c79-bcb5-47ab-87bb-e77fde92e801-kube-api-access-jr7kk\") pod \"tigera-operator-7dcd859c48-pr9wd\" (UID: \"47690c79-bcb5-47ab-87bb-e77fde92e801\") " pod="tigera-operator/tigera-operator-7dcd859c48-pr9wd"
Jan 28 00:57:54.633558 kubelet[2760]: I0128 00:57:54.633464 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47690c79-bcb5-47ab-87bb-e77fde92e801-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pr9wd\" (UID: \"47690c79-bcb5-47ab-87bb-e77fde92e801\") " pod="tigera-operator/tigera-operator-7dcd859c48-pr9wd"
Jan 28 00:57:55.290857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842421589.mount: Deactivated successfully.
Jan 28 00:57:55.454188 kubelet[2760]: E0128 00:57:55.432633 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:57:55.742926 containerd[1590]: time="2026-01-28T00:57:55.742450368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pr9wd,Uid:47690c79-bcb5-47ab-87bb-e77fde92e801,Namespace:tigera-operator,Attempt:0,}"
Jan 28 00:57:55.798120 containerd[1590]: time="2026-01-28T00:57:55.797978489Z" level=info msg="StartContainer for \"58a1990506839aa21acea67bff08d028c31e130b3378a8fc4344ed33e29e9cee\" returns successfully"
Jan 28 00:57:55.815619 containerd[1590]: time="2026-01-28T00:57:55.815388475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 00:57:55.816060 containerd[1590]: time="2026-01-28T00:57:55.815570223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
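The kube-proxy-pd842 records above walk the CRI pod lifecycle in order: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, and StartContainer launches it. A sketch of that ordering against a mock runtime follows; the interface below is illustrative only, not the real k8s.io/cri-api client, and the truncated ids are lifted from the log:

```go
// cri_lifecycle.go - a sketch of the CRI call order visible in the containerd
// records above. The RuntimeService interface here is a hypothetical stand-in
// mirroring the three verbs in the log, not the actual CRI gRPC API.
package main

import "fmt"

// RuntimeService mirrors the three CRI verbs seen in the log.
type RuntimeService interface {
	RunPodSandbox(name, uid, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime stands in for containerd's CRI plugin.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(name, uid, ns string) (string, error) {
	return "f7f4040a60de", nil // truncated sandbox id from the log
}
func (fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	return "58a199050683", nil // truncated container id from the log
}
func (fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt RuntimeService = fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-proxy-pd842", "f906b424", "kube-system")
	ctr, _ := rt.CreateContainer(sb, "kube-proxy")
	_ = rt.StartContainer(ctr)
	fmt.Println("sandbox:", sb, "container:", ctr, "started")
}
```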
Jan 28 00:57:55.816060 containerd[1590]: time="2026-01-28T00:57:55.815631889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 00:57:55.816232 containerd[1590]: time="2026-01-28T00:57:55.816146026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 00:57:56.552097 containerd[1590]: time="2026-01-28T00:57:56.551975852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pr9wd,Uid:47690c79-bcb5-47ab-87bb-e77fde92e801,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89c415318794a6f0c7abb4b98c61019785f057968e7c69a8a1cddb4ad88e295a\""
Jan 28 00:57:56.557770 containerd[1590]: time="2026-01-28T00:57:56.557630862Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 28 00:57:56.562436 kubelet[2760]: E0128 00:57:56.562404 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:57:56.594480 kubelet[2760]: E0128 00:57:56.594444 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:57:56.706820 kubelet[2760]: I0128 00:57:56.705977 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pd842" podStartSLOduration=2.705770101 podStartE2EDuration="2.705770101s" podCreationTimestamp="2026-01-28 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:57:56.682962534 +0000 UTC m=+7.661114158" watchObservedRunningTime="2026-01-28 00:57:56.705770101 +0000 UTC m=+7.683921734"
Jan 28 00:57:57.622985 kubelet[2760]: E0128 00:57:57.622941 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:57:58.363040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630603980.mount: Deactivated successfully.
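The transient mount units above encode filesystem paths with systemd's unit-name escaping: "/" becomes "-" and a literal "-" becomes \x2d, so var-lib-containerd-tmpmounts-containerd\x2dmount630603980.mount corresponds to the tmpmount at /var/lib/containerd/tmpmounts/containerd-mount630603980. A small decoder sketch (my own helper, not systemd-escape itself):

```go
// unit_unescape.go - a sketch decoding systemd mount-unit names like the
// var-lib-containerd-tmpmounts-containerd\x2dmount630603980.mount records
// above: systemd turns "/" into "-" and escapes a literal "-" as \x2d.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath reverses systemd's escaping for a .mount unit name.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name):
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err == nil {
				b.WriteByte(byte(v)) // \x2d -> "-"
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/') // an unescaped "-" separates path components
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unitToPath(`var-lib-containerd-tmpmounts-containerd\x2dmount630603980.mount`))
	// prints /var/lib/containerd/tmpmounts/containerd-mount630603980
}
```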
Jan 28 00:57:58.543565 kubelet[2760]: E0128 00:57:58.526529 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:57:58.679490 kubelet[2760]: E0128 00:57:58.679437 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:01.432935 kubelet[2760]: E0128 00:58:01.432418 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:02.431960 containerd[1590]: time="2026-01-28T00:58:02.415188020Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 28 00:58:02.491148 containerd[1590]: time="2026-01-28T00:58:02.417362320Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:02.958377 containerd[1590]: time="2026-01-28T00:58:02.958061842Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:02.965820 containerd[1590]: time="2026-01-28T00:58:02.965735601Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:02.967203 containerd[1590]: time="2026-01-28T00:58:02.967149588Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.409469775s"
Jan 28 00:58:02.967345 containerd[1590]: time="2026-01-28T00:58:02.967221572Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 28 00:58:02.972021 containerd[1590]: time="2026-01-28T00:58:02.971788185Z" level=info msg="CreateContainer within sandbox \"89c415318794a6f0c7abb4b98c61019785f057968e7c69a8a1cddb4ad88e295a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 28 00:58:02.996110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196803484.mount: Deactivated successfully.
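The "in 6.409469775s" figure in the Pulled record is containerd's own timer for the pull. It can be cross-checked against the PullImage and Pulled record timestamps, which bracket it to within roughly 50µs of log-emission overhead:

```go
// pull_duration.go - cross-checks the "in 6.409469775s" figure in the Pulled
// record against the surrounding containerd timestamps (RFC3339Nano). The
// small excess is just the overhead of emitting the records around the timer.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:57:56.557630862Z") // PullImage record
	pulled, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:58:02.967149588Z")  // Pulled record
	fmt.Println(pulled.Sub(started)) // 6.409518726s, vs the logged 6.409469775s
}
```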
Jan 28 00:58:02.997557 containerd[1590]: time="2026-01-28T00:58:02.997435554Z" level=info msg="CreateContainer within sandbox \"89c415318794a6f0c7abb4b98c61019785f057968e7c69a8a1cddb4ad88e295a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6454e7b2cc555550c992d8147a83b4444618dda3fd88c059266a98b5785e2929\""
Jan 28 00:58:02.998358 containerd[1590]: time="2026-01-28T00:58:02.998296517Z" level=info msg="StartContainer for \"6454e7b2cc555550c992d8147a83b4444618dda3fd88c059266a98b5785e2929\""
Jan 28 00:58:03.287454 containerd[1590]: time="2026-01-28T00:58:03.286543136Z" level=info msg="StartContainer for \"6454e7b2cc555550c992d8147a83b4444618dda3fd88c059266a98b5785e2929\" returns successfully"
Jan 28 00:58:10.148948 sudo[1796]: pam_unix(sudo:session): session closed for user root
Jan 28 00:58:10.166208 sshd[1789]: pam_unix(sshd:session): session closed for user core
Jan 28 00:58:10.175930 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:41844.service: Deactivated successfully.
Jan 28 00:58:10.183797 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit.
Jan 28 00:58:10.184446 systemd[1]: session-9.scope: Deactivated successfully.
Jan 28 00:58:10.195522 systemd-logind[1566]: Removed session 9.
Jan 28 00:58:15.220468 kubelet[2760]: I0128 00:58:15.219901 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pr9wd" podStartSLOduration=14.807127144 podStartE2EDuration="21.219501024s" podCreationTimestamp="2026-01-28 00:57:54 +0000 UTC" firstStartedPulling="2026-01-28 00:57:56.556818253 +0000 UTC m=+7.534969877" lastFinishedPulling="2026-01-28 00:58:02.969192134 +0000 UTC m=+13.947343757" observedRunningTime="2026-01-28 00:58:03.654763502 +0000 UTC m=+14.632915135" watchObservedRunningTime="2026-01-28 00:58:15.219501024 +0000 UTC m=+26.197652648"
Jan 28 00:58:15.306508 kubelet[2760]: I0128 00:58:15.306380 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/08eddca9-1f60-4dda-854f-65a7d272ab92-typha-certs\") pod \"calico-typha-66998f4864-cqcld\" (UID: \"08eddca9-1f60-4dda-854f-65a7d272ab92\") " pod="calico-system/calico-typha-66998f4864-cqcld"
Jan 28 00:58:15.306508 kubelet[2760]: I0128 00:58:15.306502 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08eddca9-1f60-4dda-854f-65a7d272ab92-tigera-ca-bundle\") pod \"calico-typha-66998f4864-cqcld\" (UID: \"08eddca9-1f60-4dda-854f-65a7d272ab92\") " pod="calico-system/calico-typha-66998f4864-cqcld"
Jan 28 00:58:15.407406 kubelet[2760]: I0128 00:58:15.407258 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cg8\" (UniqueName: \"kubernetes.io/projected/08eddca9-1f60-4dda-854f-65a7d272ab92-kube-api-access-h5cg8\") pod \"calico-typha-66998f4864-cqcld\" (UID: \"08eddca9-1f60-4dda-854f-65a7d272ab92\") " pod="calico-system/calico-typha-66998f4864-cqcld"
Jan 28 00:58:15.544666 kubelet[2760]: E0128 00:58:15.543662 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:15.544861 containerd[1590]: time="2026-01-28T00:58:15.544538010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66998f4864-cqcld,Uid:08eddca9-1f60-4dda-854f-65a7d272ab92,Namespace:calico-system,Attempt:0,}"
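In the tigera-operator startup record above, podStartSLOduration is the end-to-end startup time minus the image-pull window: the 21.219501024s E2E duration less the 6.412373881s between firstStartedPulling and lastFinishedPulling leaves about 14.807127143s, matching the logged 14.807127144 to within a nanosecond of rounding. Reproduced from the record's own timestamps:

```go
// slo_duration.go - reproduces the tigera-operator podStartSLOduration above:
// the SLO figure excludes the image-pull window, computed here from the
// firstStartedPulling/lastFinishedPulling timestamps in the record itself.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:57:54Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:58:15.219501024Z")
	pullStart, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:57:56.556818253Z")
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:58:02.969192134Z")

	e2e := running.Sub(created)    // 21.219501024s (podStartE2EDuration)
	pull := pullEnd.Sub(pullStart) // 6.412373881s spent pulling the image
	fmt.Println(e2e, e2e-pull)     // SLO duration comes out ~= 14.807127143s
}
```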
&PodSandboxMetadata{Name:calico-typha-66998f4864-cqcld,Uid:08eddca9-1f60-4dda-854f-65a7d272ab92,Namespace:calico-system,Attempt:0,}" Jan 28 00:58:15.595525 kubelet[2760]: E0128 00:58:15.595391 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:15.606152 containerd[1590]: time="2026-01-28T00:58:15.605800602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:15.606152 containerd[1590]: time="2026-01-28T00:58:15.606117788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:15.607760 containerd[1590]: time="2026-01-28T00:58:15.607102068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:15.607760 containerd[1590]: time="2026-01-28T00:58:15.607252245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:15.608289 kubelet[2760]: I0128 00:58:15.608182 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-var-run-calico\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608289 kubelet[2760]: I0128 00:58:15.608239 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-var-lib-calico\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608289 kubelet[2760]: I0128 00:58:15.608259 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-node-certs\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608289 kubelet[2760]: I0128 00:58:15.608272 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-policysync\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608289 kubelet[2760]: I0128 00:58:15.608287 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-cni-net-dir\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608650 kubelet[2760]: I0128 00:58:15.608304 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-cni-log-dir\") pod \"calico-node-ll5ql\" 
(UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608650 kubelet[2760]: I0128 00:58:15.608319 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-cni-bin-dir\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608650 kubelet[2760]: I0128 00:58:15.608332 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-xtables-lock\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608650 kubelet[2760]: I0128 00:58:15.608348 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfjp\" (UniqueName: \"kubernetes.io/projected/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-kube-api-access-9bfjp\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608650 kubelet[2760]: I0128 00:58:15.608365 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-flexvol-driver-host\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608944 kubelet[2760]: I0128 00:58:15.608403 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-lib-modules\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.608944 kubelet[2760]: I0128 00:58:15.608420 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2e938f-f2fe-43fb-8013-006b6eff4cd6-tigera-ca-bundle\") pod \"calico-node-ll5ql\" (UID: \"5c2e938f-f2fe-43fb-8013-006b6eff4cd6\") " pod="calico-system/calico-node-ll5ql" Jan 28 00:58:15.730377 kubelet[2760]: E0128 00:58:15.721592 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.732837 kubelet[2760]: W0128 00:58:15.732763 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.732939 kubelet[2760]: E0128 00:58:15.732895 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.733007 kubelet[2760]: I0128 00:58:15.732946 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca219588-36d1-44cb-b7f0-f29129c91014-registration-dir\") pod \"csi-node-driver-jxgdl\" (UID: \"ca219588-36d1-44cb-b7f0-f29129c91014\") " pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:15.733778 kubelet[2760]: E0128 00:58:15.733653 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.733845 kubelet[2760]: W0128 00:58:15.733811 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.734774 kubelet[2760]: E0128 00:58:15.733935 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.734774 kubelet[2760]: I0128 00:58:15.734060 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca219588-36d1-44cb-b7f0-f29129c91014-varrun\") pod \"csi-node-driver-jxgdl\" (UID: \"ca219588-36d1-44cb-b7f0-f29129c91014\") " pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:15.737090 kubelet[2760]: E0128 00:58:15.737048 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.737090 kubelet[2760]: W0128 00:58:15.737081 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.737455 kubelet[2760]: E0128 00:58:15.737394 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.737633 kubelet[2760]: E0128 00:58:15.737545 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.737633 kubelet[2760]: W0128 00:58:15.737616 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.737791 kubelet[2760]: E0128 00:58:15.737742 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.738211 kubelet[2760]: E0128 00:58:15.738096 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.738211 kubelet[2760]: W0128 00:58:15.738186 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.738330 kubelet[2760]: E0128 00:58:15.738303 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.738674 kubelet[2760]: E0128 00:58:15.738626 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.738674 kubelet[2760]: W0128 00:58:15.738656 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.738791 kubelet[2760]: E0128 00:58:15.738737 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.739091 kubelet[2760]: E0128 00:58:15.739065 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.739091 kubelet[2760]: W0128 00:58:15.739078 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.739204 kubelet[2760]: E0128 00:58:15.739105 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.739660 kubelet[2760]: E0128 00:58:15.739600 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.739660 kubelet[2760]: W0128 00:58:15.739631 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.739660 kubelet[2760]: E0128 00:58:15.739654 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.740070 kubelet[2760]: E0128 00:58:15.740011 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.740070 kubelet[2760]: W0128 00:58:15.740042 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.740070 kubelet[2760]: E0128 00:58:15.740068 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.740410 kubelet[2760]: E0128 00:58:15.740351 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.740410 kubelet[2760]: W0128 00:58:15.740379 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.740410 kubelet[2760]: E0128 00:58:15.740395 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.740410 kubelet[2760]: I0128 00:58:15.740414 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca219588-36d1-44cb-b7f0-f29129c91014-socket-dir\") pod \"csi-node-driver-jxgdl\" (UID: \"ca219588-36d1-44cb-b7f0-f29129c91014\") " pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:15.741037 kubelet[2760]: E0128 00:58:15.740807 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.741037 kubelet[2760]: W0128 00:58:15.740836 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.741037 kubelet[2760]: E0128 00:58:15.740847 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.741331 kubelet[2760]: E0128 00:58:15.741259 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.741331 kubelet[2760]: W0128 00:58:15.741286 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.741331 kubelet[2760]: E0128 00:58:15.741300 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.741768 kubelet[2760]: E0128 00:58:15.741664 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.741768 kubelet[2760]: W0128 00:58:15.741678 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.741768 kubelet[2760]: E0128 00:58:15.741751 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.742497 kubelet[2760]: E0128 00:58:15.742306 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.742497 kubelet[2760]: W0128 00:58:15.742325 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.742497 kubelet[2760]: E0128 00:58:15.742460 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.742952 kubelet[2760]: E0128 00:58:15.742678 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.742952 kubelet[2760]: W0128 00:58:15.742721 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.742952 kubelet[2760]: E0128 00:58:15.742873 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.744229 kubelet[2760]: E0128 00:58:15.743086 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.744229 kubelet[2760]: W0128 00:58:15.743098 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.744229 kubelet[2760]: E0128 00:58:15.743111 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.744229 kubelet[2760]: E0128 00:58:15.743383 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.744229 kubelet[2760]: W0128 00:58:15.743393 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.744229 kubelet[2760]: E0128 00:58:15.743405 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.744229 kubelet[2760]: E0128 00:58:15.743750 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.744229 kubelet[2760]: W0128 00:58:15.743760 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.744229 kubelet[2760]: E0128 00:58:15.743773 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.744782 kubelet[2760]: I0128 00:58:15.743788 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca219588-36d1-44cb-b7f0-f29129c91014-kubelet-dir\") pod \"csi-node-driver-jxgdl\" (UID: \"ca219588-36d1-44cb-b7f0-f29129c91014\") " pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:15.744782 kubelet[2760]: E0128 00:58:15.744093 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.744782 kubelet[2760]: W0128 00:58:15.744108 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.744782 kubelet[2760]: E0128 00:58:15.744168 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.744782 kubelet[2760]: E0128 00:58:15.744502 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.744782 kubelet[2760]: W0128 00:58:15.744512 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.744782 kubelet[2760]: E0128 00:58:15.744548 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.745087 kubelet[2760]: E0128 00:58:15.744927 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.745087 kubelet[2760]: W0128 00:58:15.744937 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.745087 kubelet[2760]: E0128 00:58:15.744962 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.745503 kubelet[2760]: E0128 00:58:15.745217 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.745503 kubelet[2760]: W0128 00:58:15.745231 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.745503 kubelet[2760]: E0128 00:58:15.745242 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.745782 kubelet[2760]: E0128 00:58:15.745659 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.745782 kubelet[2760]: W0128 00:58:15.745671 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.746894 kubelet[2760]: E0128 00:58:15.746853 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.747817 kubelet[2760]: E0128 00:58:15.747506 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.747817 kubelet[2760]: W0128 00:58:15.747534 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.747817 kubelet[2760]: E0128 00:58:15.747662 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.748133 kubelet[2760]: E0128 00:58:15.748103 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.748133 kubelet[2760]: W0128 00:58:15.748127 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.748313 containerd[1590]: time="2026-01-28T00:58:15.748203603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66998f4864-cqcld,Uid:08eddca9-1f60-4dda-854f-65a7d272ab92,Namespace:calico-system,Attempt:0,} returns sandbox id \"42f5e3b7d02997d54ef6d99ba30cee08bb2ecc58afebf9c79874df2564b4c5eb\"" Jan 28 00:58:15.748387 kubelet[2760]: E0128 00:58:15.748230 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.748758 kubelet[2760]: E0128 00:58:15.748654 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.748758 kubelet[2760]: W0128 00:58:15.748721 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.748917 kubelet[2760]: E0128 00:58:15.748887 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.749107 kubelet[2760]: E0128 00:58:15.749071 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.749107 kubelet[2760]: W0128 00:58:15.749092 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.749107 kubelet[2760]: E0128 00:58:15.749103 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.750148 kubelet[2760]: E0128 00:58:15.749512 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:15.750148 kubelet[2760]: E0128 00:58:15.749557 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.750148 kubelet[2760]: W0128 00:58:15.749600 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.750148 kubelet[2760]: E0128 00:58:15.749610 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.750738 kubelet[2760]: E0128 00:58:15.750644 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.750738 kubelet[2760]: W0128 00:58:15.750680 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.750893 containerd[1590]: time="2026-01-28T00:58:15.750879518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 00:58:15.751410 kubelet[2760]: E0128 00:58:15.751343 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.751603 kubelet[2760]: W0128 00:58:15.751455 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.751682 kubelet[2760]: E0128 00:58:15.751667 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.751818 kubelet[2760]: E0128 00:58:15.751548 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.752453 kubelet[2760]: E0128 00:58:15.752328 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.752453 kubelet[2760]: W0128 00:58:15.752354 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.752453 kubelet[2760]: E0128 00:58:15.752366 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.753504 kubelet[2760]: E0128 00:58:15.752904 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.753504 kubelet[2760]: W0128 00:58:15.752977 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.753504 kubelet[2760]: E0128 00:58:15.752998 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.753504 kubelet[2760]: I0128 00:58:15.753218 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g87x9\" (UniqueName: \"kubernetes.io/projected/ca219588-36d1-44cb-b7f0-f29129c91014-kube-api-access-g87x9\") pod \"csi-node-driver-jxgdl\" (UID: \"ca219588-36d1-44cb-b7f0-f29129c91014\") " pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:15.753991 kubelet[2760]: E0128 00:58:15.753961 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.753991 kubelet[2760]: W0128 00:58:15.753986 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.754055 kubelet[2760]: E0128 00:58:15.754026 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.754474 kubelet[2760]: E0128 00:58:15.754396 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.754474 kubelet[2760]: W0128 00:58:15.754466 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.755418 kubelet[2760]: E0128 00:58:15.754596 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.755418 kubelet[2760]: E0128 00:58:15.754878 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.755418 kubelet[2760]: W0128 00:58:15.754891 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.755418 kubelet[2760]: E0128 00:58:15.754991 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.755969 kubelet[2760]: E0128 00:58:15.755912 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.755969 kubelet[2760]: W0128 00:58:15.755945 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.756191 kubelet[2760]: E0128 00:58:15.756082 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.756475 kubelet[2760]: E0128 00:58:15.756363 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.756475 kubelet[2760]: W0128 00:58:15.756403 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.756807 kubelet[2760]: E0128 00:58:15.756605 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.757102 kubelet[2760]: E0128 00:58:15.757071 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.757102 kubelet[2760]: W0128 00:58:15.757095 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.757315 kubelet[2760]: E0128 00:58:15.757224 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.757492 kubelet[2760]: E0128 00:58:15.757462 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.757492 kubelet[2760]: W0128 00:58:15.757491 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.758078 kubelet[2760]: E0128 00:58:15.757644 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.758455 kubelet[2760]: E0128 00:58:15.758381 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.758455 kubelet[2760]: W0128 00:58:15.758413 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.758570 kubelet[2760]: E0128 00:58:15.758541 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.758801 kubelet[2760]: E0128 00:58:15.758752 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.758801 kubelet[2760]: W0128 00:58:15.758779 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.758894 kubelet[2760]: E0128 00:58:15.758865 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.762054 kubelet[2760]: E0128 00:58:15.762006 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.762054 kubelet[2760]: W0128 00:58:15.762037 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.762211 kubelet[2760]: E0128 00:58:15.762162 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.763239 kubelet[2760]: E0128 00:58:15.762361 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.763239 kubelet[2760]: W0128 00:58:15.762373 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.763239 kubelet[2760]: E0128 00:58:15.762496 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.763239 kubelet[2760]: E0128 00:58:15.762843 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.763239 kubelet[2760]: W0128 00:58:15.762854 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.763239 kubelet[2760]: E0128 00:58:15.762891 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.763239 kubelet[2760]: E0128 00:58:15.763209 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.763239 kubelet[2760]: W0128 00:58:15.763219 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.763239 kubelet[2760]: E0128 00:58:15.763235 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.763608 kubelet[2760]: E0128 00:58:15.763545 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.763608 kubelet[2760]: W0128 00:58:15.763555 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.763608 kubelet[2760]: E0128 00:58:15.763578 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.763961 kubelet[2760]: E0128 00:58:15.763902 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.763961 kubelet[2760]: W0128 00:58:15.763931 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.763961 kubelet[2760]: E0128 00:58:15.763946 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.764328 kubelet[2760]: E0128 00:58:15.764290 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.764328 kubelet[2760]: W0128 00:58:15.764321 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.764469 kubelet[2760]: E0128 00:58:15.764409 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.764746 kubelet[2760]: E0128 00:58:15.764728 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.764746 kubelet[2760]: W0128 00:58:15.764743 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.764803 kubelet[2760]: E0128 00:58:15.764779 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.765125 kubelet[2760]: E0128 00:58:15.765092 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.765125 kubelet[2760]: W0128 00:58:15.765113 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.765246 kubelet[2760]: E0128 00:58:15.765165 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.765502 kubelet[2760]: E0128 00:58:15.765422 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.765502 kubelet[2760]: W0128 00:58:15.765472 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.765575 kubelet[2760]: E0128 00:58:15.765523 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.765833 kubelet[2760]: E0128 00:58:15.765799 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.765833 kubelet[2760]: W0128 00:58:15.765820 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.765940 kubelet[2760]: E0128 00:58:15.765853 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.766137 kubelet[2760]: E0128 00:58:15.766101 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.766164 kubelet[2760]: W0128 00:58:15.766146 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.766241 kubelet[2760]: E0128 00:58:15.766208 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.766543 kubelet[2760]: E0128 00:58:15.766515 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.766543 kubelet[2760]: W0128 00:58:15.766537 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.766643 kubelet[2760]: E0128 00:58:15.766610 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.767052 kubelet[2760]: E0128 00:58:15.766987 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.767052 kubelet[2760]: W0128 00:58:15.767026 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.767140 kubelet[2760]: E0128 00:58:15.767084 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.767409 kubelet[2760]: E0128 00:58:15.767358 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.767409 kubelet[2760]: W0128 00:58:15.767384 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.767409 kubelet[2760]: E0128 00:58:15.767406 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.767875 kubelet[2760]: E0128 00:58:15.767840 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.767875 kubelet[2760]: W0128 00:58:15.767863 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.767951 kubelet[2760]: E0128 00:58:15.767901 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.768470 kubelet[2760]: E0128 00:58:15.768408 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.768470 kubelet[2760]: W0128 00:58:15.768454 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.768470 kubelet[2760]: E0128 00:58:15.768465 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.865773 kubelet[2760]: E0128 00:58:15.865402 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.865773 kubelet[2760]: W0128 00:58:15.865472 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.865773 kubelet[2760]: E0128 00:58:15.865498 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.866794 kubelet[2760]: E0128 00:58:15.866334 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.866794 kubelet[2760]: W0128 00:58:15.866351 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.866794 kubelet[2760]: E0128 00:58:15.866372 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.867661 kubelet[2760]: E0128 00:58:15.867478 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.867661 kubelet[2760]: W0128 00:58:15.867496 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.867661 kubelet[2760]: E0128 00:58:15.867516 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.867902 kubelet[2760]: E0128 00:58:15.867892 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.867999 kubelet[2760]: W0128 00:58:15.867907 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.867999 kubelet[2760]: E0128 00:58:15.867930 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.868539 kubelet[2760]: E0128 00:58:15.868164 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.868539 kubelet[2760]: W0128 00:58:15.868207 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.868539 kubelet[2760]: E0128 00:58:15.868384 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.868539 kubelet[2760]: E0128 00:58:15.868489 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.868539 kubelet[2760]: W0128 00:58:15.868499 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.868964 kubelet[2760]: E0128 00:58:15.868626 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.868964 kubelet[2760]: E0128 00:58:15.868833 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.869142 kubelet[2760]: W0128 00:58:15.868955 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.869142 kubelet[2760]: E0128 00:58:15.869030 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.869666 kubelet[2760]: E0128 00:58:15.869624 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.869666 kubelet[2760]: W0128 00:58:15.869657 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.869891 kubelet[2760]: E0128 00:58:15.869825 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.870120 kubelet[2760]: E0128 00:58:15.870058 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.870120 kubelet[2760]: W0128 00:58:15.870090 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.870204 kubelet[2760]: E0128 00:58:15.870192 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.870875 kubelet[2760]: E0128 00:58:15.870829 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.870875 kubelet[2760]: W0128 00:58:15.870863 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.871025 kubelet[2760]: E0128 00:58:15.870990 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.871298 kubelet[2760]: E0128 00:58:15.871263 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.871298 kubelet[2760]: W0128 00:58:15.871296 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.871509 kubelet[2760]: E0128 00:58:15.871399 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.871779 kubelet[2760]: E0128 00:58:15.871739 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.871779 kubelet[2760]: W0128 00:58:15.871767 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.871908 kubelet[2760]: E0128 00:58:15.871872 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.872194 kubelet[2760]: E0128 00:58:15.872149 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.872194 kubelet[2760]: W0128 00:58:15.872188 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.872547 kubelet[2760]: E0128 00:58:15.872336 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.872668 kubelet[2760]: E0128 00:58:15.872620 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.872810 kubelet[2760]: W0128 00:58:15.872675 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.873221 kubelet[2760]: E0128 00:58:15.872894 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.873221 kubelet[2760]: E0128 00:58:15.873214 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.873221 kubelet[2760]: W0128 00:58:15.873226 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.873426 kubelet[2760]: E0128 00:58:15.873291 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.874122 kubelet[2760]: E0128 00:58:15.873990 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.874122 kubelet[2760]: W0128 00:58:15.874011 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.874307 kubelet[2760]: E0128 00:58:15.874135 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.874820 kubelet[2760]: E0128 00:58:15.874778 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.874820 kubelet[2760]: W0128 00:58:15.874807 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.874997 kubelet[2760]: E0128 00:58:15.874862 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.875326 kubelet[2760]: E0128 00:58:15.875261 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.875326 kubelet[2760]: W0128 00:58:15.875288 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.875473 kubelet[2760]: E0128 00:58:15.875339 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.875634 kubelet[2760]: E0128 00:58:15.875605 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.875634 kubelet[2760]: W0128 00:58:15.875626 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.875810 kubelet[2760]: E0128 00:58:15.875758 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.876108 kubelet[2760]: E0128 00:58:15.876045 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.876108 kubelet[2760]: W0128 00:58:15.876087 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.876184 kubelet[2760]: E0128 00:58:15.876159 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.876550 kubelet[2760]: E0128 00:58:15.876492 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.876550 kubelet[2760]: W0128 00:58:15.876523 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.876622 kubelet[2760]: E0128 00:58:15.876603 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:58:15.877031 kubelet[2760]: E0128 00:58:15.876984 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.877031 kubelet[2760]: W0128 00:58:15.877009 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.877158 kubelet[2760]: E0128 00:58:15.877075 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.877365 kubelet[2760]: E0128 00:58:15.877321 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.877365 kubelet[2760]: W0128 00:58:15.877351 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.877830 kubelet[2760]: E0128 00:58:15.877633 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.877920 kubelet[2760]: E0128 00:58:15.877899 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.877920 kubelet[2760]: W0128 00:58:15.877911 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.877920 kubelet[2760]: E0128 00:58:15.877922 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.878471 kubelet[2760]: E0128 00:58:15.878410 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.878471 kubelet[2760]: W0128 00:58:15.878463 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.878533 kubelet[2760]: E0128 00:58:15.878475 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:58:15.890658 kubelet[2760]: E0128 00:58:15.890591 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:58:15.890658 kubelet[2760]: W0128 00:58:15.890628 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:58:15.890658 kubelet[2760]: E0128 00:58:15.890649 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
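The three-record sequence above is what the kubelet's FlexVolume dynamic-plugin prober emits each time it probes the plugin directory: it executes the driver binary with the `init` argument and tries to JSON-decode stdout, so a missing executable produces empty output and the `unexpected end of JSON input` unmarshal error. As a hedged illustration of the contract being probed (this stub is not the real `uds` driver deployed here, and `driverStatus` is an illustrative name), a minimal FlexVolume driver only needs to answer `init` with a small JSON status object:

// flexvol_stub.go - hedged sketch of the FlexVolume driver call contract.
//
// package main
//
// import (
// 	"encoding/json"
// 	"fmt"
// 	"os"
// )
//
// // driverStatus mirrors the JSON shape the kubelet unmarshals from a
// // FlexVolume driver's stdout (illustrative type name, not kubelet source).
// type driverStatus struct {
// 	Status       string          `json:"status"`
// 	Message      string          `json:"message,omitempty"`
// 	Capabilities map[string]bool `json:"capabilities,omitempty"`
// }
//
// func main() {
// 	if len(os.Args) < 2 {
// 		fmt.Println(`{"status":"Failure","message":"no command given"}`)
// 		os.Exit(1)
// 	}
// 	var out driverStatus
// 	switch os.Args[1] {
// 	case "init":
// 		// Printing nothing here is exactly what yields
// 		// "unexpected end of JSON input" at driver-call.go:262 above.
// 		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
// 	default:
// 		out = driverStatus{Status: "Not supported"}
// 	}
// 	b, err := json.Marshal(out)
// 	if err != nil {
// 		os.Exit(1)
// 	}
// 	fmt.Println(string(b))
// }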
Jan 28 00:58:16.047616 kubelet[2760]: E0128 00:58:16.047532 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:16.048229 containerd[1590]: time="2026-01-28T00:58:16.048196105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ll5ql,Uid:5c2e938f-f2fe-43fb-8013-006b6eff4cd6,Namespace:calico-system,Attempt:0,}"
Jan 28 00:58:16.089042 containerd[1590]: time="2026-01-28T00:58:16.088761716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 00:58:16.090016 containerd[1590]: time="2026-01-28T00:58:16.089925091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 00:58:16.090016 containerd[1590]: time="2026-01-28T00:58:16.089958936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 00:58:16.090205 containerd[1590]: time="2026-01-28T00:58:16.090067723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 00:58:16.162830 containerd[1590]: time="2026-01-28T00:58:16.162762161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ll5ql,Uid:5c2e938f-f2fe-43fb-8013-006b6eff4cd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\""
Jan 28 00:58:16.164494 kubelet[2760]: E0128 00:58:16.164408 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:16.423254 systemd[1]: run-containerd-runc-k8s.io-42f5e3b7d02997d54ef6d99ba30cee08bb2ecc58afebf9c79874df2564b4c5eb-runc.O54Y3p.mount: Deactivated successfully.
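The recurring dns.go:153 record means the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; it keeps the first entries up to its limit (three, matching the three servers shown in the applied line) and warns that the rest were omitted. A hedged sketch of that truncation logic, with hypothetical helper names (`capNameservers` and `maxDNSNameservers` are illustrative, not kubelet source):

// nameserver_cap.go - hedged sketch of the kubelet's nameserver capping.
//
// package main
//
// import "fmt"
//
// const maxDNSNameservers = 3 // assumed limit, matching the three servers kept in the log
//
// // capNameservers keeps the first maxDNSNameservers entries and reports
// // whether any were dropped, mirroring the "Nameserver limits exceeded" event.
// func capNameservers(ns []string) (kept []string, exceeded bool) {
// 	if len(ns) <= maxDNSNameservers {
// 		return ns, false
// 	}
// 	return ns[:maxDNSNameservers], true
// }
//
// func main() {
// 	kept, exceeded := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
// 	if exceeded {
// 		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %v\n", kept)
// 	}
// }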
Jan 28 00:58:17.147290 containerd[1590]: time="2026-01-28T00:58:17.147123147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 28 00:58:17.147290 containerd[1590]: time="2026-01-28T00:58:17.147159337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:17.150806 containerd[1590]: time="2026-01-28T00:58:17.150763985Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:17.151845 containerd[1590]: time="2026-01-28T00:58:17.151796935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:17.153254 containerd[1590]: time="2026-01-28T00:58:17.153195754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.402287411s"
Jan 28 00:58:17.153310 containerd[1590]: time="2026-01-28T00:58:17.153265187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 28 00:58:17.164615 containerd[1590]: time="2026-01-28T00:58:17.164568314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 00:58:17.193787 containerd[1590]: time="2026-01-28T00:58:17.193669056Z" level=info msg="CreateContainer within sandbox \"42f5e3b7d02997d54ef6d99ba30cee08bb2ecc58afebf9c79874df2564b4c5eb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 00:58:17.217635 containerd[1590]: time="2026-01-28T00:58:17.217525799Z" level=info msg="CreateContainer within sandbox \"42f5e3b7d02997d54ef6d99ba30cee08bb2ecc58afebf9c79874df2564b4c5eb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c370771f944f02f5ee4c7b03522a8893088b31efd76e933a5daeff4e054446a4\""
Jan 28 00:58:17.221150 containerd[1590]: time="2026-01-28T00:58:17.220979885Z" level=info msg="StartContainer for \"c370771f944f02f5ee4c7b03522a8893088b31efd76e933a5daeff4e054446a4\""
Jan 28 00:58:17.264008 kubelet[2760]: E0128 00:58:17.263406 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014"
Jan 28 00:58:17.375295 containerd[1590]: time="2026-01-28T00:58:17.375193568Z" level=info msg="StartContainer for \"c370771f944f02f5ee4c7b03522a8893088b31efd76e933a5daeff4e054446a4\" returns successfully"
Jan 28 00:58:17.690456 kubelet[2760]: E0128 00:58:17.690371 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:17.738054 kubelet[2760]: E0128 00:58:17.737961 2760 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 00:58:17.738054 kubelet[2760]: W0128 00:58:17.738009 2760 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 00:58:17.738054 kubelet[2760]: E0128 00:58:17.738042 2760 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 00:58:17.748026 kubelet[2760]: I0128 00:58:17.746479 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66998f4864-cqcld" podStartSLOduration=1.333520983 podStartE2EDuration="2.74646069s" podCreationTimestamp="2026-01-28 00:58:15 +0000 UTC" firstStartedPulling="2026-01-28 00:58:15.750176154 +0000 UTC m=+26.728327778" lastFinishedPulling="2026-01-28 00:58:17.163115861 +0000 UTC m=+28.141267485" observedRunningTime="2026-01-28 00:58:17.737778998 +0000 UTC m=+28.715930631" watchObservedRunningTime="2026-01-28 00:58:17.74646069 +0000 UTC m=+28.724612313"
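The pod_startup_latency_tracker record above is internally consistent and worth decoding: the E2E duration is watchObservedRunningTime minus podCreationTimestamp (00:58:17.74646069 - 00:58:15 = 2.74646069s), and the SLO duration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 1.412939707s), leaving 1.333520983s. A small sketch reproducing that arithmetic from the logged timestamps (the timestamp strings are copied verbatim from the record; the relationship between the fields is inferred from the numbers, not from kubelet source):

// pod_startup_latency.go - hedged decoding of the latency record above.
//
// package main
//
// import (
// 	"fmt"
// 	"time"
// )
//
// // layout matches time.Time's default String() form; when parsing, Go also
// // accepts a fractional second even though the layout omits it.
// const layout = "2006-01-02 15:04:05 -0700 MST"
//
// func mustParse(s string) time.Time {
// 	t, err := time.Parse(layout, s)
// 	if err != nil {
// 		panic(err)
// 	}
// 	return t
// }
//
// func main() {
// 	created := mustParse("2026-01-28 00:58:15 +0000 UTC")
// 	firstStartedPulling := mustParse("2026-01-28 00:58:15.750176154 +0000 UTC")
// 	lastFinishedPulling := mustParse("2026-01-28 00:58:17.163115861 +0000 UTC")
// 	watchObservedRunning := mustParse("2026-01-28 00:58:17.74646069 +0000 UTC")
//
// 	e2e := watchObservedRunning.Sub(created)             // 2.74646069s  == podStartE2EDuration
// 	pull := lastFinishedPulling.Sub(firstStartedPulling) // 1.412939707s spent pulling images
// 	fmt.Println(e2e, pull, e2e-pull)                     // e2e-pull == 1.333520983s == podStartSLOduration
// }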
Jan 28 00:58:17.988398 containerd[1590]: time="2026-01-28T00:58:17.987932756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:17.989338 containerd[1590]: time="2026-01-28T00:58:17.989285887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 28 00:58:17.991233 containerd[1590]: time="2026-01-28T00:58:17.991168680Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:17.996289 containerd[1590]: time="2026-01-28T00:58:17.996162302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:17.997260 containerd[1590]: time="2026-01-28T00:58:17.997145027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 832.528571ms"
Jan 28 00:58:17.997260 containerd[1590]: time="2026-01-28T00:58:17.997196525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 28 00:58:18.001335 containerd[1590]: time="2026-01-28T00:58:18.000131605Z" level=info msg="CreateContainer within sandbox \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 28 00:58:18.032015 containerd[1590]: time="2026-01-28T00:58:18.031928837Z" level=info msg="CreateContainer within sandbox \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa213fa9363ee156c7eaabc3d77342f17f967d7a44543b444b2f254883bc66dd\""
Jan 28 00:58:18.033094 containerd[1590]: time="2026-01-28T00:58:18.032978249Z" level=info msg="StartContainer for \"fa213fa9363ee156c7eaabc3d77342f17f967d7a44543b444b2f254883bc66dd\""
Jan 28 00:58:18.360102 containerd[1590]: time="2026-01-28T00:58:18.359919939Z" level=info msg="StartContainer for \"fa213fa9363ee156c7eaabc3d77342f17f967d7a44543b444b2f254883bc66dd\" returns successfully"
Jan 28 00:58:18.448947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa213fa9363ee156c7eaabc3d77342f17f967d7a44543b444b2f254883bc66dd-rootfs.mount: Deactivated successfully.
Jan 28 00:58:18.464665 containerd[1590]: time="2026-01-28T00:58:18.461943792Z" level=info msg="shim disconnected" id=fa213fa9363ee156c7eaabc3d77342f17f967d7a44543b444b2f254883bc66dd namespace=k8s.io
Jan 28 00:58:18.464665 containerd[1590]: time="2026-01-28T00:58:18.464661795Z" level=warning msg="cleaning up after shim disconnected" id=fa213fa9363ee156c7eaabc3d77342f17f967d7a44543b444b2f254883bc66dd namespace=k8s.io
Jan 28 00:58:18.464665 containerd[1590]: time="2026-01-28T00:58:18.464678225Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 00:58:18.499964 containerd[1590]: time="2026-01-28T00:58:18.499869443Z" level=warning msg="cleanup warnings time=\"2026-01-28T00:58:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 28 00:58:18.695607 kubelet[2760]: I0128 00:58:18.695453 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 00:58:18.697368 kubelet[2760]: E0128 00:58:18.695992 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:18.697368 kubelet[2760]: E0128 00:58:18.695997 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:18.701105 containerd[1590]: time="2026-01-28T00:58:18.700816448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 28 00:58:19.263848 kubelet[2760]: E0128 00:58:19.263744 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014"
Jan 28 00:58:19.741186 kubelet[2760]: E0128 00:58:19.741078 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:20.742282 kubelet[2760]: E0128 00:58:20.742203 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 00:58:21.267281 kubelet[2760]: E0128 00:58:21.266864 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014"
Jan 28 00:58:21.301431 containerd[1590]: time="2026-01-28T00:58:21.300943737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:58:21.304501 containerd[1590]: time="2026-01-28T00:58:21.304396598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 28 00:58:21.306368 containerd[1590]: time="2026-01-28T00:58:21.306302205Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
containerd[1590]: time="2026-01-28T00:58:21.318958999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:21.327630 containerd[1590]: time="2026-01-28T00:58:21.327580822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.626628234s" Jan 28 00:58:21.328005 containerd[1590]: time="2026-01-28T00:58:21.327953991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 00:58:21.382113 containerd[1590]: time="2026-01-28T00:58:21.381969457Z" level=info msg="CreateContainer within sandbox \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 00:58:21.413857 containerd[1590]: time="2026-01-28T00:58:21.413767885Z" level=info msg="CreateContainer within sandbox \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8c4bcae2cd8ccac2cea35f46f079a3f53b3fb105d70e2cff1fdc3641db8381df\"" Jan 28 00:58:21.414641 containerd[1590]: time="2026-01-28T00:58:21.414562732Z" level=info msg="StartContainer for \"8c4bcae2cd8ccac2cea35f46f079a3f53b3fb105d70e2cff1fdc3641db8381df\"" Jan 28 00:58:21.556210 containerd[1590]: time="2026-01-28T00:58:21.555664873Z" level=info msg="StartContainer for \"8c4bcae2cd8ccac2cea35f46f079a3f53b3fb105d70e2cff1fdc3641db8381df\" returns successfully" Jan 28 00:58:21.749111 kubelet[2760]: E0128 00:58:21.748818 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:22.667599 kubelet[2760]: I0128 00:58:22.667519 2760 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 00:58:22.707419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c4bcae2cd8ccac2cea35f46f079a3f53b3fb105d70e2cff1fdc3641db8381df-rootfs.mount: Deactivated successfully. 
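
The repeated dns.go:153 "Nameserver limits exceeded" errors are the kubelet trimming the node's resolv.conf: it applies at most three nameserver entries (the count the classic glibc resolver honors), so with more than three configured it keeps the first three, here 1.1.1.1 1.0.0.1 8.8.8.8, and reports the rest as omitted. Below is a minimal sketch of that truncation check, assuming the stock /etc/resolv.conf path and the historical limit of three.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // maxNameservers mirrors the kubelet's classic limit of three resolv.conf
    // nameserver entries (what the traditional glibc resolver will use).
    const maxNameservers = 3

    func main() {
        // The node file the kubelet bases pod DNS on.
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }

        if len(servers) > maxNameservers {
            // Same situation as the dns.go:153 entries above: extras are dropped.
            fmt.Printf("Nameserver limits exceeded: applying %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Printf("nameservers within limit: %v\n", servers)
        }
    }

The errors are noisy but usually harmless, unless one of the omitted servers was the only one able to resolve a needed zone; trimming the node's resolv.conf to three entries silences them.
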
Jan 28 00:58:22.712348 containerd[1590]: time="2026-01-28T00:58:22.712194259Z" level=info msg="shim disconnected" id=8c4bcae2cd8ccac2cea35f46f079a3f53b3fb105d70e2cff1fdc3641db8381df namespace=k8s.io Jan 28 00:58:22.713008 containerd[1590]: time="2026-01-28T00:58:22.712356026Z" level=warning msg="cleaning up after shim disconnected" id=8c4bcae2cd8ccac2cea35f46f079a3f53b3fb105d70e2cff1fdc3641db8381df namespace=k8s.io Jan 28 00:58:22.713008 containerd[1590]: time="2026-01-28T00:58:22.712374501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 00:58:22.770561 kubelet[2760]: E0128 00:58:22.770490 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:22.792443 containerd[1590]: time="2026-01-28T00:58:22.792364626Z" level=warning msg="cleanup warnings time=\"2026-01-28T00:58:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 00:58:22.793193 kubelet[2760]: I0128 00:58:22.792877 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skff\" (UniqueName: \"kubernetes.io/projected/b6eec03e-d69e-4a77-be85-879339debc77-kube-api-access-9skff\") pod \"goldmane-666569f655-df4fc\" (UID: \"b6eec03e-d69e-4a77-be85-879339debc77\") " pod="calico-system/goldmane-666569f655-df4fc" Jan 28 00:58:22.793193 kubelet[2760]: I0128 00:58:22.792956 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfl5\" (UniqueName: \"kubernetes.io/projected/e87ecd7f-76fb-416b-97ac-bcf8061e4f34-kube-api-access-tjfl5\") pod \"calico-apiserver-d58fb4688-4dgpt\" (UID: \"e87ecd7f-76fb-416b-97ac-bcf8061e4f34\") " pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" Jan 28 00:58:22.793193 kubelet[2760]: I0128 00:58:22.792985 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/965f8593-0273-4890-b739-044411ceda00-whisker-backend-key-pair\") pod \"whisker-77ff8946cf-7wfpb\" (UID: \"965f8593-0273-4890-b739-044411ceda00\") " pod="calico-system/whisker-77ff8946cf-7wfpb" Jan 28 00:58:22.793193 kubelet[2760]: I0128 00:58:22.793012 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a1c4cdd-fd99-43db-b45c-fa57bb001ab8-config-volume\") pod \"coredns-668d6bf9bc-v27jx\" (UID: \"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8\") " pod="kube-system/coredns-668d6bf9bc-v27jx" Jan 28 00:58:22.793193 kubelet[2760]: I0128 00:58:22.793043 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54njd\" (UniqueName: \"kubernetes.io/projected/965f8593-0273-4890-b739-044411ceda00-kube-api-access-54njd\") pod \"whisker-77ff8946cf-7wfpb\" (UID: \"965f8593-0273-4890-b739-044411ceda00\") " pod="calico-system/whisker-77ff8946cf-7wfpb" Jan 28 00:58:22.794009 kubelet[2760]: I0128 00:58:22.793674 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7j2j\" (UniqueName: \"kubernetes.io/projected/9d40997d-8269-410f-a37f-77eca7302f00-kube-api-access-d7j2j\") pod \"calico-kube-controllers-55b8fb4bd5-kz6kn\" (UID: 
\"9d40997d-8269-410f-a37f-77eca7302f00\") " pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" Jan 28 00:58:22.794299 kubelet[2760]: I0128 00:58:22.794277 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgj9\" (UniqueName: \"kubernetes.io/projected/29716958-c780-41f2-b2ff-5fbdb74c3998-kube-api-access-4cgj9\") pod \"calico-apiserver-d58fb4688-vm2xw\" (UID: \"29716958-c780-41f2-b2ff-5fbdb74c3998\") " pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" Jan 28 00:58:22.795823 kubelet[2760]: I0128 00:58:22.795530 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/965f8593-0273-4890-b739-044411ceda00-whisker-ca-bundle\") pod \"whisker-77ff8946cf-7wfpb\" (UID: \"965f8593-0273-4890-b739-044411ceda00\") " pod="calico-system/whisker-77ff8946cf-7wfpb" Jan 28 00:58:22.795823 kubelet[2760]: I0128 00:58:22.795598 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lxhp\" (UniqueName: \"kubernetes.io/projected/93e9e1d3-02e7-468a-9d2c-3161f279d51b-kube-api-access-9lxhp\") pod \"coredns-668d6bf9bc-ms6pj\" (UID: \"93e9e1d3-02e7-468a-9d2c-3161f279d51b\") " pod="kube-system/coredns-668d6bf9bc-ms6pj" Jan 28 00:58:22.795823 kubelet[2760]: I0128 00:58:22.795628 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29716958-c780-41f2-b2ff-5fbdb74c3998-calico-apiserver-certs\") pod \"calico-apiserver-d58fb4688-vm2xw\" (UID: \"29716958-c780-41f2-b2ff-5fbdb74c3998\") " pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" Jan 28 00:58:22.795823 kubelet[2760]: I0128 00:58:22.795655 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d40997d-8269-410f-a37f-77eca7302f00-tigera-ca-bundle\") pod \"calico-kube-controllers-55b8fb4bd5-kz6kn\" (UID: \"9d40997d-8269-410f-a37f-77eca7302f00\") " pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" Jan 28 00:58:22.795823 kubelet[2760]: I0128 00:58:22.795681 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b6eec03e-d69e-4a77-be85-879339debc77-goldmane-key-pair\") pod \"goldmane-666569f655-df4fc\" (UID: \"b6eec03e-d69e-4a77-be85-879339debc77\") " pod="calico-system/goldmane-666569f655-df4fc" Jan 28 00:58:22.797877 kubelet[2760]: I0128 00:58:22.796954 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93e9e1d3-02e7-468a-9d2c-3161f279d51b-config-volume\") pod \"coredns-668d6bf9bc-ms6pj\" (UID: \"93e9e1d3-02e7-468a-9d2c-3161f279d51b\") " pod="kube-system/coredns-668d6bf9bc-ms6pj" Jan 28 00:58:22.797877 kubelet[2760]: I0128 00:58:22.797005 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6eec03e-d69e-4a77-be85-879339debc77-config\") pod \"goldmane-666569f655-df4fc\" (UID: \"b6eec03e-d69e-4a77-be85-879339debc77\") " pod="calico-system/goldmane-666569f655-df4fc" Jan 28 00:58:22.797877 kubelet[2760]: I0128 00:58:22.797068 2760 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e87ecd7f-76fb-416b-97ac-bcf8061e4f34-calico-apiserver-certs\") pod \"calico-apiserver-d58fb4688-4dgpt\" (UID: \"e87ecd7f-76fb-416b-97ac-bcf8061e4f34\") " pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" Jan 28 00:58:22.797877 kubelet[2760]: I0128 00:58:22.797112 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn62k\" (UniqueName: \"kubernetes.io/projected/8a1c4cdd-fd99-43db-b45c-fa57bb001ab8-kube-api-access-cn62k\") pod \"coredns-668d6bf9bc-v27jx\" (UID: \"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8\") " pod="kube-system/coredns-668d6bf9bc-v27jx" Jan 28 00:58:22.797877 kubelet[2760]: I0128 00:58:22.797144 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6eec03e-d69e-4a77-be85-879339debc77-goldmane-ca-bundle\") pod \"goldmane-666569f655-df4fc\" (UID: \"b6eec03e-d69e-4a77-be85-879339debc77\") " pod="calico-system/goldmane-666569f655-df4fc" Jan 28 00:58:23.049047 containerd[1590]: time="2026-01-28T00:58:23.048797357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-4dgpt,Uid:e87ecd7f-76fb-416b-97ac-bcf8061e4f34,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:58:23.052067 kubelet[2760]: E0128 00:58:23.051989 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:23.052496 containerd[1590]: time="2026-01-28T00:58:23.052468151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms6pj,Uid:93e9e1d3-02e7-468a-9d2c-3161f279d51b,Namespace:kube-system,Attempt:0,}" Jan 28 00:58:23.090993 containerd[1590]: time="2026-01-28T00:58:23.090913075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8fb4bd5-kz6kn,Uid:9d40997d-8269-410f-a37f-77eca7302f00,Namespace:calico-system,Attempt:0,}" Jan 28 00:58:23.092954 containerd[1590]: time="2026-01-28T00:58:23.092898415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-df4fc,Uid:b6eec03e-d69e-4a77-be85-879339debc77,Namespace:calico-system,Attempt:0,}" Jan 28 00:58:23.106020 containerd[1590]: time="2026-01-28T00:58:23.105886948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-vm2xw,Uid:29716958-c780-41f2-b2ff-5fbdb74c3998,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:58:23.108773 kubelet[2760]: E0128 00:58:23.108641 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:23.117042 containerd[1590]: time="2026-01-28T00:58:23.116995046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v27jx,Uid:8a1c4cdd-fd99-43db-b45c-fa57bb001ab8,Namespace:kube-system,Attempt:0,}" Jan 28 00:58:23.117391 containerd[1590]: time="2026-01-28T00:58:23.117355592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77ff8946cf-7wfpb,Uid:965f8593-0273-4890-b739-044411ceda00,Namespace:calico-system,Attempt:0,}" Jan 28 00:58:23.270498 containerd[1590]: time="2026-01-28T00:58:23.270395452Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-jxgdl,Uid:ca219588-36d1-44cb-b7f0-f29129c91014,Namespace:calico-system,Attempt:0,}" Jan 28 00:58:23.433633 containerd[1590]: time="2026-01-28T00:58:23.433464259Z" level=error msg="Failed to destroy network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.438561 containerd[1590]: time="2026-01-28T00:58:23.438476552Z" level=error msg="encountered an error cleaning up failed sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.438673 containerd[1590]: time="2026-01-28T00:58:23.438622350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-4dgpt,Uid:e87ecd7f-76fb-416b-97ac-bcf8061e4f34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.444935 containerd[1590]: time="2026-01-28T00:58:23.444898096Z" level=error msg="Failed to destroy network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.445806 containerd[1590]: time="2026-01-28T00:58:23.445648924Z" level=error msg="encountered an error cleaning up failed sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.446048 containerd[1590]: time="2026-01-28T00:58:23.445909679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms6pj,Uid:93e9e1d3-02e7-468a-9d2c-3161f279d51b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.450641 kubelet[2760]: E0128 00:58:23.450573 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.450831 kubelet[2760]: E0128 00:58:23.450772 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" Jan 28 00:58:23.451158 kubelet[2760]: E0128 00:58:23.450841 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" Jan 28 00:58:23.451158 kubelet[2760]: E0128 00:58:23.451048 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.451224 kubelet[2760]: E0128 00:58:23.451165 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ms6pj" Jan 28 00:58:23.451298 kubelet[2760]: E0128 00:58:23.451191 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ms6pj" Jan 28 00:58:23.451496 kubelet[2760]: E0128 00:58:23.451314 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ms6pj_kube-system(93e9e1d3-02e7-468a-9d2c-3161f279d51b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ms6pj_kube-system(93e9e1d3-02e7-468a-9d2c-3161f279d51b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ms6pj" podUID="93e9e1d3-02e7-468a-9d2c-3161f279d51b" Jan 28 00:58:23.451589 kubelet[2760]: E0128 00:58:23.451514 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d58fb4688-4dgpt_calico-apiserver(e87ecd7f-76fb-416b-97ac-bcf8061e4f34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d58fb4688-4dgpt_calico-apiserver(e87ecd7f-76fb-416b-97ac-bcf8061e4f34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:58:23.459300 containerd[1590]: time="2026-01-28T00:58:23.457363828Z" level=error msg="Failed to destroy network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.462108 containerd[1590]: time="2026-01-28T00:58:23.461992271Z" level=error msg="encountered an error cleaning up failed sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.462108 containerd[1590]: time="2026-01-28T00:58:23.462054849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-vm2xw,Uid:29716958-c780-41f2-b2ff-5fbdb74c3998,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.463919 kubelet[2760]: E0128 00:58:23.463168 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.463919 kubelet[2760]: E0128 00:58:23.463284 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" Jan 28 00:58:23.463919 kubelet[2760]: E0128 00:58:23.463306 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" Jan 28 00:58:23.464214 kubelet[2760]: E0128 00:58:23.463836 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d58fb4688-vm2xw_calico-apiserver(29716958-c780-41f2-b2ff-5fbdb74c3998)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-d58fb4688-vm2xw_calico-apiserver(29716958-c780-41f2-b2ff-5fbdb74c3998)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:58:23.497437 containerd[1590]: time="2026-01-28T00:58:23.497366278Z" level=error msg="Failed to destroy network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.498436 containerd[1590]: time="2026-01-28T00:58:23.498390406Z" level=error msg="encountered an error cleaning up failed sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.498872 containerd[1590]: time="2026-01-28T00:58:23.498836053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v27jx,Uid:8a1c4cdd-fd99-43db-b45c-fa57bb001ab8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.499679 kubelet[2760]: E0128 00:58:23.499624 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.500568 kubelet[2760]: E0128 00:58:23.500187 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v27jx" Jan 28 00:58:23.500568 kubelet[2760]: E0128 00:58:23.500238 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v27jx" Jan 28 00:58:23.500568 kubelet[2760]: E0128 00:58:23.500297 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-v27jx_kube-system(8a1c4cdd-fd99-43db-b45c-fa57bb001ab8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v27jx_kube-system(8a1c4cdd-fd99-43db-b45c-fa57bb001ab8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v27jx" podUID="8a1c4cdd-fd99-43db-b45c-fa57bb001ab8" Jan 28 00:58:23.510484 containerd[1590]: time="2026-01-28T00:58:23.510216724Z" level=error msg="Failed to destroy network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.511340 containerd[1590]: time="2026-01-28T00:58:23.511302438Z" level=error msg="encountered an error cleaning up failed sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.511674 containerd[1590]: time="2026-01-28T00:58:23.511508130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77ff8946cf-7wfpb,Uid:965f8593-0273-4890-b739-044411ceda00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.513946 kubelet[2760]: E0128 00:58:23.512629 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.513946 kubelet[2760]: E0128 00:58:23.513056 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77ff8946cf-7wfpb" Jan 28 00:58:23.513946 kubelet[2760]: E0128 00:58:23.513112 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77ff8946cf-7wfpb" Jan 28 00:58:23.514372 kubelet[2760]: E0128 00:58:23.513177 2760 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77ff8946cf-7wfpb_calico-system(965f8593-0273-4890-b739-044411ceda00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77ff8946cf-7wfpb_calico-system(965f8593-0273-4890-b739-044411ceda00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77ff8946cf-7wfpb" podUID="965f8593-0273-4890-b739-044411ceda00" Jan 28 00:58:23.519525 containerd[1590]: time="2026-01-28T00:58:23.519479360Z" level=error msg="Failed to destroy network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.520191 containerd[1590]: time="2026-01-28T00:58:23.520165125Z" level=error msg="encountered an error cleaning up failed sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.520363 containerd[1590]: time="2026-01-28T00:58:23.520340828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8fb4bd5-kz6kn,Uid:9d40997d-8269-410f-a37f-77eca7302f00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.525287 kubelet[2760]: E0128 00:58:23.524549 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.525287 kubelet[2760]: E0128 00:58:23.524619 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" Jan 28 00:58:23.525287 kubelet[2760]: E0128 00:58:23.524647 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" 
Jan 28 00:58:23.525458 kubelet[2760]: E0128 00:58:23.525049 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55b8fb4bd5-kz6kn_calico-system(9d40997d-8269-410f-a37f-77eca7302f00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55b8fb4bd5-kz6kn_calico-system(9d40997d-8269-410f-a37f-77eca7302f00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:58:23.529621 containerd[1590]: time="2026-01-28T00:58:23.529509637Z" level=error msg="Failed to destroy network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.530585 containerd[1590]: time="2026-01-28T00:58:23.530453633Z" level=error msg="encountered an error cleaning up failed sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.530585 containerd[1590]: time="2026-01-28T00:58:23.530540427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-df4fc,Uid:b6eec03e-d69e-4a77-be85-879339debc77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.531223 kubelet[2760]: E0128 00:58:23.531149 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.531403 kubelet[2760]: E0128 00:58:23.531246 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-df4fc" Jan 28 00:58:23.531403 kubelet[2760]: E0128 00:58:23.531274 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-df4fc" Jan 28 00:58:23.531403 kubelet[2760]: E0128 00:58:23.531333 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-df4fc_calico-system(b6eec03e-d69e-4a77-be85-879339debc77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-df4fc_calico-system(b6eec03e-d69e-4a77-be85-879339debc77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:58:23.544250 containerd[1590]: time="2026-01-28T00:58:23.544184842Z" level=error msg="Failed to destroy network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.544789 containerd[1590]: time="2026-01-28T00:58:23.544681827Z" level=error msg="encountered an error cleaning up failed sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.544843 containerd[1590]: time="2026-01-28T00:58:23.544815291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxgdl,Uid:ca219588-36d1-44cb-b7f0-f29129c91014,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.545143 kubelet[2760]: E0128 00:58:23.545095 2760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.545252 kubelet[2760]: E0128 00:58:23.545167 2760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:23.545252 kubelet[2760]: E0128 00:58:23.545191 2760 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jxgdl" Jan 28 00:58:23.545305 kubelet[2760]: E0128 00:58:23.545246 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:23.776493 kubelet[2760]: I0128 00:58:23.776234 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:23.780179 kubelet[2760]: I0128 00:58:23.780037 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:23.783182 kubelet[2760]: I0128 00:58:23.783005 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:23.785443 kubelet[2760]: I0128 00:58:23.785330 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:23.794495 containerd[1590]: time="2026-01-28T00:58:23.794147321Z" level=info msg="StopPodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\"" Jan 28 00:58:23.794495 containerd[1590]: time="2026-01-28T00:58:23.794193578Z" level=info msg="StopPodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\"" Jan 28 00:58:23.794495 containerd[1590]: time="2026-01-28T00:58:23.794589803Z" level=info msg="StopPodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\"" Jan 28 00:58:23.796263 containerd[1590]: time="2026-01-28T00:58:23.794664452Z" level=info msg="StopPodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\"" Jan 28 00:58:23.796263 containerd[1590]: time="2026-01-28T00:58:23.796191829Z" level=info msg="Ensure that sandbox 8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3 in task-service has been cleanup successfully" Jan 28 00:58:23.796263 containerd[1590]: time="2026-01-28T00:58:23.796217923Z" level=info msg="Ensure that sandbox f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0 in task-service has been cleanup successfully" Jan 28 00:58:23.797162 kubelet[2760]: I0128 00:58:23.796612 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:23.797246 containerd[1590]: time="2026-01-28T00:58:23.796827589Z" level=info msg="Ensure that sandbox 5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55 in task-service has been cleanup successfully" Jan 28 00:58:23.808003 containerd[1590]: time="2026-01-28T00:58:23.807329136Z" level=info 
msg="Ensure that sandbox 8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788 in task-service has been cleanup successfully" Jan 28 00:58:23.808003 containerd[1590]: time="2026-01-28T00:58:23.807875578Z" level=info msg="StopPodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\"" Jan 28 00:58:23.808215 containerd[1590]: time="2026-01-28T00:58:23.808143799Z" level=info msg="Ensure that sandbox cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75 in task-service has been cleanup successfully" Jan 28 00:58:23.820823 kubelet[2760]: E0128 00:58:23.820648 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:23.823601 kubelet[2760]: I0128 00:58:23.823396 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:23.826350 containerd[1590]: time="2026-01-28T00:58:23.825930016Z" level=info msg="StopPodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\"" Jan 28 00:58:23.826350 containerd[1590]: time="2026-01-28T00:58:23.826092536Z" level=info msg="Ensure that sandbox d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48 in task-service has been cleanup successfully" Jan 28 00:58:23.827802 containerd[1590]: time="2026-01-28T00:58:23.827196159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 00:58:23.831001 kubelet[2760]: I0128 00:58:23.830924 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:23.833041 containerd[1590]: time="2026-01-28T00:58:23.832957287Z" level=info msg="StopPodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\"" Jan 28 00:58:23.833643 containerd[1590]: time="2026-01-28T00:58:23.833237160Z" level=info msg="Ensure that sandbox 867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba in task-service has been cleanup successfully" Jan 28 00:58:23.836516 kubelet[2760]: I0128 00:58:23.835955 2760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:23.848328 containerd[1590]: time="2026-01-28T00:58:23.848224224Z" level=info msg="StopPodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\"" Jan 28 00:58:23.858214 containerd[1590]: time="2026-01-28T00:58:23.857840893Z" level=info msg="Ensure that sandbox bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73 in task-service has been cleanup successfully" Jan 28 00:58:23.955155 containerd[1590]: time="2026-01-28T00:58:23.955109757Z" level=error msg="StopPodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" failed" error="failed to destroy network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:23.956639 kubelet[2760]: E0128 00:58:23.956461 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:23.956946 kubelet[2760]: E0128 00:58:23.956807 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3"} Jan 28 00:58:23.956998 kubelet[2760]: E0128 00:58:23.956948 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:23.957093 kubelet[2760]: E0128 00:58:23.957058 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v27jx" podUID="8a1c4cdd-fd99-43db-b45c-fa57bb001ab8" Jan 28 00:58:24.063152 containerd[1590]: time="2026-01-28T00:58:24.060993343Z" level=error msg="StopPodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" failed" error="failed to destroy network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.073599 containerd[1590]: time="2026-01-28T00:58:24.073526358Z" level=error msg="StopPodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" failed" error="failed to destroy network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.074448 kubelet[2760]: E0128 00:58:24.074245 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:24.074901 kubelet[2760]: E0128 00:58:24.074595 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55"} Jan 28 00:58:24.074901 kubelet[2760]: E0128 00:58:24.074642 2760 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca219588-36d1-44cb-b7f0-f29129c91014\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.075807 kubelet[2760]: E0128 00:58:24.075423 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:24.075807 kubelet[2760]: E0128 00:58:24.075732 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75"} Jan 28 00:58:24.076146 kubelet[2760]: E0128 00:58:24.075943 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca219588-36d1-44cb-b7f0-f29129c91014\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:24.076146 kubelet[2760]: E0128 00:58:24.076102 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d40997d-8269-410f-a37f-77eca7302f00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.076146 kubelet[2760]: E0128 00:58:24.076122 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d40997d-8269-410f-a37f-77eca7302f00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:58:24.082142 containerd[1590]: time="2026-01-28T00:58:24.082079468Z" level=error msg="StopPodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" failed" error="failed to destroy network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.082821 kubelet[2760]: E0128 00:58:24.082678 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:24.084076 kubelet[2760]: E0128 00:58:24.082948 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788"} Jan 28 00:58:24.084076 kubelet[2760]: E0128 00:58:24.083007 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e87ecd7f-76fb-416b-97ac-bcf8061e4f34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.084076 kubelet[2760]: E0128 00:58:24.083040 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e87ecd7f-76fb-416b-97ac-bcf8061e4f34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:58:24.086658 containerd[1590]: time="2026-01-28T00:58:24.086576916Z" level=error msg="StopPodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" failed" error="failed to destroy network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.088005 kubelet[2760]: E0128 00:58:24.086993 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:24.088005 kubelet[2760]: E0128 00:58:24.087042 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48"} Jan 28 00:58:24.088005 kubelet[2760]: E0128 00:58:24.087077 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93e9e1d3-02e7-468a-9d2c-3161f279d51b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.088005 kubelet[2760]: E0128 00:58:24.087111 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93e9e1d3-02e7-468a-9d2c-3161f279d51b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ms6pj" podUID="93e9e1d3-02e7-468a-9d2c-3161f279d51b" Jan 28 00:58:24.089282 containerd[1590]: time="2026-01-28T00:58:24.088331281Z" level=error msg="StopPodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" failed" error="failed to destroy network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.089358 kubelet[2760]: E0128 00:58:24.088594 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:24.089358 kubelet[2760]: E0128 00:58:24.088626 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0"} Jan 28 00:58:24.089358 kubelet[2760]: E0128 00:58:24.088653 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"965f8593-0273-4890-b739-044411ceda00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.089358 kubelet[2760]: E0128 00:58:24.088676 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"965f8593-0273-4890-b739-044411ceda00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77ff8946cf-7wfpb" podUID="965f8593-0273-4890-b739-044411ceda00" Jan 28 00:58:24.097938 containerd[1590]: time="2026-01-28T00:58:24.097852572Z" level=error msg="StopPodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" 
failed" error="failed to destroy network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.098294 kubelet[2760]: E0128 00:58:24.098191 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:24.098294 kubelet[2760]: E0128 00:58:24.098251 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba"} Jan 28 00:58:24.098464 kubelet[2760]: E0128 00:58:24.098335 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29716958-c780-41f2-b2ff-5fbdb74c3998\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.098464 kubelet[2760]: E0128 00:58:24.098378 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29716958-c780-41f2-b2ff-5fbdb74c3998\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:58:24.107758 containerd[1590]: time="2026-01-28T00:58:24.107618347Z" level=error msg="StopPodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" failed" error="failed to destroy network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:58:24.108098 kubelet[2760]: E0128 00:58:24.108044 2760 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:24.108192 kubelet[2760]: E0128 00:58:24.108113 2760 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73"} Jan 28 00:58:24.108192 kubelet[2760]: 
E0128 00:58:24.108150 2760 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6eec03e-d69e-4a77-be85-879339debc77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 00:58:24.108192 kubelet[2760]: E0128 00:58:24.108177 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6eec03e-d69e-4a77-be85-879339debc77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:58:30.338218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438842687.mount: Deactivated successfully. Jan 28 00:58:30.579308 containerd[1590]: time="2026-01-28T00:58:30.579112925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:30.581512 containerd[1590]: time="2026-01-28T00:58:30.581365452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 00:58:30.586862 containerd[1590]: time="2026-01-28T00:58:30.586641673Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:30.592261 containerd[1590]: time="2026-01-28T00:58:30.592017878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:58:30.593802 containerd[1590]: time="2026-01-28T00:58:30.593480462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.765962962s" Jan 28 00:58:30.593802 containerd[1590]: time="2026-01-28T00:58:30.593525156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 00:58:30.612169 containerd[1590]: time="2026-01-28T00:58:30.611952203Z" level=info msg="CreateContainer within sandbox \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 00:58:30.653195 containerd[1590]: time="2026-01-28T00:58:30.653118044Z" level=info msg="CreateContainer within sandbox \"956bd19036b9a0628ca6cbcf8df6d66961a78c144d2ee530d9a9ab0cbd8d46ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c420a7f9d21e872198f44dd85b3b276187ecbe24557d4b7aff38a0179cd81ef1\"" Jan 28 00:58:30.656805 
containerd[1590]: time="2026-01-28T00:58:30.654203452Z" level=info msg="StartContainer for \"c420a7f9d21e872198f44dd85b3b276187ecbe24557d4b7aff38a0179cd81ef1\"" Jan 28 00:58:30.850338 containerd[1590]: time="2026-01-28T00:58:30.850186918Z" level=info msg="StartContainer for \"c420a7f9d21e872198f44dd85b3b276187ecbe24557d4b7aff38a0179cd81ef1\" returns successfully" Jan 28 00:58:31.003732 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 00:58:31.005120 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 28 00:58:31.099928 kubelet[2760]: E0128 00:58:31.099876 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:31.144754 kubelet[2760]: I0128 00:58:31.144615 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ll5ql" podStartSLOduration=1.715498086 podStartE2EDuration="16.144595667s" podCreationTimestamp="2026-01-28 00:58:15 +0000 UTC" firstStartedPulling="2026-01-28 00:58:16.166578908 +0000 UTC m=+27.144730532" lastFinishedPulling="2026-01-28 00:58:30.59567649 +0000 UTC m=+41.573828113" observedRunningTime="2026-01-28 00:58:31.144490502 +0000 UTC m=+42.122642135" watchObservedRunningTime="2026-01-28 00:58:31.144595667 +0000 UTC m=+42.122747291" Jan 28 00:58:31.252967 containerd[1590]: time="2026-01-28T00:58:31.252477382Z" level=info msg="StopPodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\"" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.373 [INFO][4064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.374 [INFO][4064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" iface="eth0" netns="/var/run/netns/cni-ab712c3e-72b0-f217-18c9-bb37ae32af2e" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.375 [INFO][4064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" iface="eth0" netns="/var/run/netns/cni-ab712c3e-72b0-f217-18c9-bb37ae32af2e" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.375 [INFO][4064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" iface="eth0" netns="/var/run/netns/cni-ab712c3e-72b0-f217-18c9-bb37ae32af2e" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.375 [INFO][4064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.375 [INFO][4064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.827 [INFO][4072] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.875 [INFO][4072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.879 [INFO][4072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.975 [WARNING][4072] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.978 [INFO][4072] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.985 [INFO][4072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:31.993564 containerd[1590]: 2026-01-28 00:58:31.990 [INFO][4064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:31.998232 containerd[1590]: time="2026-01-28T00:58:31.995147700Z" level=info msg="TearDown network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" successfully" Jan 28 00:58:31.998232 containerd[1590]: time="2026-01-28T00:58:31.995387394Z" level=info msg="StopPodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" returns successfully" Jan 28 00:58:32.000461 systemd[1]: run-netns-cni\x2dab712c3e\x2d72b0\x2df217\x2d18c9\x2dbb37ae32af2e.mount: Deactivated successfully. 
Jan 28 00:58:32.105162 kubelet[2760]: I0128 00:58:32.104500 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54njd\" (UniqueName: \"kubernetes.io/projected/965f8593-0273-4890-b739-044411ceda00-kube-api-access-54njd\") pod \"965f8593-0273-4890-b739-044411ceda00\" (UID: \"965f8593-0273-4890-b739-044411ceda00\") " Jan 28 00:58:32.105162 kubelet[2760]: I0128 00:58:32.105074 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/965f8593-0273-4890-b739-044411ceda00-whisker-backend-key-pair\") pod \"965f8593-0273-4890-b739-044411ceda00\" (UID: \"965f8593-0273-4890-b739-044411ceda00\") " Jan 28 00:58:32.105162 kubelet[2760]: I0128 00:58:32.105118 2760 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/965f8593-0273-4890-b739-044411ceda00-whisker-ca-bundle\") pod \"965f8593-0273-4890-b739-044411ceda00\" (UID: \"965f8593-0273-4890-b739-044411ceda00\") " Jan 28 00:58:32.110855 kubelet[2760]: I0128 00:58:32.110756 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/965f8593-0273-4890-b739-044411ceda00-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "965f8593-0273-4890-b739-044411ceda00" (UID: "965f8593-0273-4890-b739-044411ceda00"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:58:32.117194 kubelet[2760]: I0128 00:58:32.117094 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965f8593-0273-4890-b739-044411ceda00-kube-api-access-54njd" (OuterVolumeSpecName: "kube-api-access-54njd") pod "965f8593-0273-4890-b739-044411ceda00" (UID: "965f8593-0273-4890-b739-044411ceda00"). InnerVolumeSpecName "kube-api-access-54njd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:58:32.117194 kubelet[2760]: I0128 00:58:32.117152 2760 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/965f8593-0273-4890-b739-044411ceda00-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "965f8593-0273-4890-b739-044411ceda00" (UID: "965f8593-0273-4890-b739-044411ceda00"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 00:58:32.122233 systemd[1]: var-lib-kubelet-pods-965f8593\x2d0273\x2d4890\x2db739\x2d044411ceda00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54njd.mount: Deactivated successfully. Jan 28 00:58:32.123990 systemd[1]: var-lib-kubelet-pods-965f8593\x2d0273\x2d4890\x2db739\x2d044411ceda00-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 28 00:58:32.206746 kubelet[2760]: I0128 00:58:32.206585 2760 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/965f8593-0273-4890-b739-044411ceda00-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 00:58:32.206746 kubelet[2760]: I0128 00:58:32.206665 2760 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54njd\" (UniqueName: \"kubernetes.io/projected/965f8593-0273-4890-b739-044411ceda00-kube-api-access-54njd\") on node \"localhost\" DevicePath \"\"" Jan 28 00:58:32.206746 kubelet[2760]: I0128 00:58:32.206769 2760 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/965f8593-0273-4890-b739-044411ceda00-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 00:58:32.415268 kubelet[2760]: I0128 00:58:32.413826 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mh4p\" (UniqueName: \"kubernetes.io/projected/f873fe7c-2fd9-4543-9ebd-959fbca499b0-kube-api-access-8mh4p\") pod \"whisker-5fd76b96dc-mbjdc\" (UID: \"f873fe7c-2fd9-4543-9ebd-959fbca499b0\") " pod="calico-system/whisker-5fd76b96dc-mbjdc" Jan 28 00:58:32.415268 kubelet[2760]: I0128 00:58:32.415138 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f873fe7c-2fd9-4543-9ebd-959fbca499b0-whisker-backend-key-pair\") pod \"whisker-5fd76b96dc-mbjdc\" (UID: \"f873fe7c-2fd9-4543-9ebd-959fbca499b0\") " pod="calico-system/whisker-5fd76b96dc-mbjdc" Jan 28 00:58:32.415268 kubelet[2760]: I0128 00:58:32.415164 2760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f873fe7c-2fd9-4543-9ebd-959fbca499b0-whisker-ca-bundle\") pod \"whisker-5fd76b96dc-mbjdc\" (UID: \"f873fe7c-2fd9-4543-9ebd-959fbca499b0\") " pod="calico-system/whisker-5fd76b96dc-mbjdc" Jan 28 00:58:32.583116 containerd[1590]: time="2026-01-28T00:58:32.582955749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd76b96dc-mbjdc,Uid:f873fe7c-2fd9-4543-9ebd-959fbca499b0,Namespace:calico-system,Attempt:0,}" Jan 28 00:58:32.783086 systemd-networkd[1249]: cali580c5f2a0da: Link UP Jan 28 00:58:32.785805 systemd-networkd[1249]: cali580c5f2a0da: Gained carrier Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.651 [INFO][4095] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.671 [INFO][4095] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0 whisker-5fd76b96dc- calico-system f873fe7c-2fd9-4543-9ebd-959fbca499b0 918 0 2026-01-28 00:58:32 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fd76b96dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5fd76b96dc-mbjdc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali580c5f2a0da [] [] }} ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 
00:58:32.671 [INFO][4095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.713 [INFO][4110] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" HandleID="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Workload="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.713 [INFO][4110] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" HandleID="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Workload="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000520130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5fd76b96dc-mbjdc", "timestamp":"2026-01-28 00:58:32.713433454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.713 [INFO][4110] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.713 [INFO][4110] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.714 [INFO][4110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.724 [INFO][4110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.744 [INFO][4110] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.750 [INFO][4110] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.752 [INFO][4110] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.755 [INFO][4110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.755 [INFO][4110] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.756 [INFO][4110] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.763 [INFO][4110] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" host="localhost" Jan 28 00:58:32.803166 
containerd[1590]: 2026-01-28 00:58:32.768 [INFO][4110] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.768 [INFO][4110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" host="localhost" Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.768 [INFO][4110] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:32.803166 containerd[1590]: 2026-01-28 00:58:32.768 [INFO][4110] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" HandleID="k8s-pod-network.70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Workload="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.804018 containerd[1590]: 2026-01-28 00:58:32.772 [INFO][4095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0", GenerateName:"whisker-5fd76b96dc-", Namespace:"calico-system", SelfLink:"", UID:"f873fe7c-2fd9-4543-9ebd-959fbca499b0", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fd76b96dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5fd76b96dc-mbjdc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali580c5f2a0da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:32.804018 containerd[1590]: 2026-01-28 00:58:32.772 [INFO][4095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.804018 containerd[1590]: 2026-01-28 00:58:32.772 [INFO][4095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580c5f2a0da ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.804018 containerd[1590]: 2026-01-28 00:58:32.787 [INFO][4095] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.804018 containerd[1590]: 2026-01-28 00:58:32.787 [INFO][4095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0", GenerateName:"whisker-5fd76b96dc-", Namespace:"calico-system", SelfLink:"", UID:"f873fe7c-2fd9-4543-9ebd-959fbca499b0", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fd76b96dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d", Pod:"whisker-5fd76b96dc-mbjdc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali580c5f2a0da", MAC:"ca:1e:65:25:fb:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:32.804018 containerd[1590]: 2026-01-28 00:58:32.798 [INFO][4095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d" Namespace="calico-system" Pod="whisker-5fd76b96dc-mbjdc" WorkloadEndpoint="localhost-k8s-whisker--5fd76b96dc--mbjdc-eth0" Jan 28 00:58:32.850308 containerd[1590]: time="2026-01-28T00:58:32.849970109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:32.852032 containerd[1590]: time="2026-01-28T00:58:32.851872644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:32.852032 containerd[1590]: time="2026-01-28T00:58:32.851918190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:32.852273 containerd[1590]: time="2026-01-28T00:58:32.852139279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:32.886980 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:32.961465 containerd[1590]: time="2026-01-28T00:58:32.961367214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd76b96dc-mbjdc,Uid:f873fe7c-2fd9-4543-9ebd-959fbca499b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"70ecc00abde3764a3378d5fda00fdb5d7bad3e40b19f3b441026aeee7f7e644d\"" Jan 28 00:58:32.964919 containerd[1590]: time="2026-01-28T00:58:32.964877493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:58:33.038520 containerd[1590]: time="2026-01-28T00:58:33.038314591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:33.070660 containerd[1590]: time="2026-01-28T00:58:33.041199084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:58:33.070932 containerd[1590]: time="2026-01-28T00:58:33.042110570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:58:33.072768 kubelet[2760]: E0128 00:58:33.071186 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:58:33.072768 kubelet[2760]: E0128 00:58:33.071247 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:58:33.075956 kubelet[2760]: E0128 00:58:33.074405 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:720b431776f9430c801351a09b535fb1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:33.079270 containerd[1590]: time="2026-01-28T00:58:33.079197700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:58:33.172445 containerd[1590]: time="2026-01-28T00:58:33.172350730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:33.174500 containerd[1590]: time="2026-01-28T00:58:33.174276126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:58:33.174500 containerd[1590]: time="2026-01-28T00:58:33.174403157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:58:33.175244 kubelet[2760]: E0128 00:58:33.174796 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:58:33.175244 kubelet[2760]: E0128 00:58:33.174853 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:58:33.175893 kubelet[2760]: E0128 00:58:33.174965 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:33.176442 kubelet[2760]: E0128 00:58:33.176364 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:58:33.272840 kubelet[2760]: I0128 00:58:33.272400 2760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965f8593-0273-4890-b739-044411ceda00" path="/var/lib/kubelet/pods/965f8593-0273-4890-b739-044411ceda00/volumes" Jan 28 
00:58:33.418841 kernel: bpftool[4300]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 00:58:33.726942 systemd-networkd[1249]: vxlan.calico: Link UP Jan 28 00:58:33.726953 systemd-networkd[1249]: vxlan.calico: Gained carrier Jan 28 00:58:34.135183 kubelet[2760]: E0128 00:58:34.134842 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:58:34.262046 systemd-networkd[1249]: cali580c5f2a0da: Gained IPv6LL Jan 28 00:58:34.265176 containerd[1590]: time="2026-01-28T00:58:34.265029193Z" level=info msg="StopPodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\"" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.346 [INFO][4386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.347 [INFO][4386] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" iface="eth0" netns="/var/run/netns/cni-562d4d05-6f0d-d71a-3a22-808e73fe0e01" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.347 [INFO][4386] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" iface="eth0" netns="/var/run/netns/cni-562d4d05-6f0d-d71a-3a22-808e73fe0e01" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.348 [INFO][4386] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" iface="eth0" netns="/var/run/netns/cni-562d4d05-6f0d-d71a-3a22-808e73fe0e01" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.348 [INFO][4386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.348 [INFO][4386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.392 [INFO][4394] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.392 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.392 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.401 [WARNING][4394] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.401 [INFO][4394] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.405 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:34.412396 containerd[1590]: 2026-01-28 00:58:34.409 [INFO][4386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:34.413124 containerd[1590]: time="2026-01-28T00:58:34.412535109Z" level=info msg="TearDown network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" successfully" Jan 28 00:58:34.413124 containerd[1590]: time="2026-01-28T00:58:34.412576227Z" level=info msg="StopPodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" returns successfully" Jan 28 00:58:34.414752 containerd[1590]: time="2026-01-28T00:58:34.414176587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-vm2xw,Uid:29716958-c780-41f2-b2ff-5fbdb74c3998,Namespace:calico-apiserver,Attempt:1,}" Jan 28 00:58:34.418841 systemd[1]: run-netns-cni\x2d562d4d05\x2d6f0d\x2dd71a\x2d3a22\x2d808e73fe0e01.mount: Deactivated successfully. 
Jan 28 00:58:34.589631 systemd-networkd[1249]: cali0cf5357259c: Link UP Jan 28 00:58:34.591217 systemd-networkd[1249]: cali0cf5357259c: Gained carrier Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.482 [INFO][4402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0 calico-apiserver-d58fb4688- calico-apiserver 29716958-c780-41f2-b2ff-5fbdb74c3998 940 0 2026-01-28 00:58:10 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d58fb4688 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d58fb4688-vm2xw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0cf5357259c [] [] }} ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.482 [INFO][4402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.529 [INFO][4417] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" HandleID="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.529 [INFO][4417] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" HandleID="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d58fb4688-vm2xw", "timestamp":"2026-01-28 00:58:34.529504513 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.529 [INFO][4417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.529 [INFO][4417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.529 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.541 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.550 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.558 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.561 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.565 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.565 [INFO][4417] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.568 [INFO][4417] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7 Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.574 [INFO][4417] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.581 [INFO][4417] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.581 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" host="localhost" Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.581 [INFO][4417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:58:34.611739 containerd[1590]: 2026-01-28 00:58:34.581 [INFO][4417] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" HandleID="k8s-pod-network.39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.613499 containerd[1590]: 2026-01-28 00:58:34.586 [INFO][4402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"29716958-c780-41f2-b2ff-5fbdb74c3998", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d58fb4688-vm2xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cf5357259c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:34.613499 containerd[1590]: 2026-01-28 00:58:34.587 [INFO][4402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.613499 containerd[1590]: 2026-01-28 00:58:34.587 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cf5357259c ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.613499 containerd[1590]: 2026-01-28 00:58:34.591 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.613499 containerd[1590]: 2026-01-28 00:58:34.595 [INFO][4402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"29716958-c780-41f2-b2ff-5fbdb74c3998", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7", Pod:"calico-apiserver-d58fb4688-vm2xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cf5357259c", MAC:"ba:36:4d:94:f0:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:34.613499 containerd[1590]: 2026-01-28 00:58:34.607 [INFO][4402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-vm2xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:34.643310 containerd[1590]: time="2026-01-28T00:58:34.642963943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:34.643310 containerd[1590]: time="2026-01-28T00:58:34.643018377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:34.643310 containerd[1590]: time="2026-01-28T00:58:34.643028446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:34.643310 containerd[1590]: time="2026-01-28T00:58:34.643171207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:34.683957 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:34.733480 containerd[1590]: time="2026-01-28T00:58:34.733321777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-vm2xw,Uid:29716958-c780-41f2-b2ff-5fbdb74c3998,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7\"" Jan 28 00:58:34.736289 containerd[1590]: time="2026-01-28T00:58:34.735943241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:58:34.799317 containerd[1590]: time="2026-01-28T00:58:34.799065836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:34.802014 containerd[1590]: time="2026-01-28T00:58:34.801923824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:58:34.802157 containerd[1590]: time="2026-01-28T00:58:34.802085533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:34.802428 kubelet[2760]: E0128 00:58:34.802329 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:34.802428 kubelet[2760]: E0128 00:58:34.802416 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:34.803012 kubelet[2760]: E0128 00:58:34.802607 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cgj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-vm2xw_calico-apiserver(29716958-c780-41f2-b2ff-5fbdb74c3998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:34.804015 kubelet[2760]: E0128 00:58:34.803962 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:58:34.965987 systemd-networkd[1249]: vxlan.calico: Gained IPv6LL Jan 28 00:58:35.138924 kubelet[2760]: E0128 00:58:35.138882 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:58:35.266022 containerd[1590]: time="2026-01-28T00:58:35.265901183Z" level=info msg="StopPodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\"" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.337 [INFO][4490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.337 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" iface="eth0" netns="/var/run/netns/cni-1151980e-be60-5a65-ccfc-300195574d76" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.338 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" iface="eth0" netns="/var/run/netns/cni-1151980e-be60-5a65-ccfc-300195574d76" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.338 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" iface="eth0" netns="/var/run/netns/cni-1151980e-be60-5a65-ccfc-300195574d76" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.338 [INFO][4490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.338 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.366 [INFO][4499] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.366 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.366 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.373 [WARNING][4499] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.373 [INFO][4499] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.376 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:35.380766 containerd[1590]: 2026-01-28 00:58:35.378 [INFO][4490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:35.384049 containerd[1590]: time="2026-01-28T00:58:35.383977978Z" level=info msg="TearDown network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" successfully" Jan 28 00:58:35.384049 containerd[1590]: time="2026-01-28T00:58:35.384034857Z" level=info msg="StopPodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" returns successfully" Jan 28 00:58:35.384181 systemd[1]: run-netns-cni\x2d1151980e\x2dbe60\x2d5a65\x2dccfc\x2d300195574d76.mount: Deactivated successfully. 
Jan 28 00:58:35.385581 containerd[1590]: time="2026-01-28T00:58:35.385160947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxgdl,Uid:ca219588-36d1-44cb-b7f0-f29129c91014,Namespace:calico-system,Attempt:1,}" Jan 28 00:58:35.569621 systemd-networkd[1249]: cali0c7f691e372: Link UP Jan 28 00:58:35.571509 systemd-networkd[1249]: cali0c7f691e372: Gained carrier Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.457 [INFO][4508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jxgdl-eth0 csi-node-driver- calico-system ca219588-36d1-44cb-b7f0-f29129c91014 953 0 2026-01-28 00:58:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jxgdl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0c7f691e372 [] [] }} ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.458 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.499 [INFO][4521] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" HandleID="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.500 [INFO][4521] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" HandleID="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jxgdl", "timestamp":"2026-01-28 00:58:35.499781756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.500 [INFO][4521] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.500 [INFO][4521] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.500 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.518 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.527 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.535 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.538 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.542 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.542 [INFO][4521] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.544 [INFO][4521] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.552 [INFO][4521] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.559 [INFO][4521] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.560 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" host="localhost" Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.560 [INFO][4521] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
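The assignment sequence above (acquire the host-wide IPAM lock, confirm this host's affinity for 192.168.88.128/26, load the block, claim one address, release the lock) is Calico's block-based IPAM. A simplified in-memory model of "next free address in an affine /26" follows; real Calico persists the block's allocation state and handles in the datastore, and the entries marked used before .130 are assumptions, since those allocations predate this excerpt:

```go
// Simplified model of the "Trying affinity for 192.168.88.128/26" flow:
// pick the first unallocated address in the host's affine block.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // assumed reserved or allocated earlier
		netip.MustParseAddr("192.168.88.129"): true, // assumed allocated earlier (not shown here)
		netip.MustParseAddr("192.168.88.130"): true, // calico-apiserver-d58fb4688-vm2xw, per the log
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("assigned", a) // 192.168.88.131, matching csi-node-driver-jxgdl
	}
}
```

Block affinity is also what keeps routing compact: because this host owns the /26, peers need a single route for 192.168.88.128/26 instead of one /32 per pod.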
Jan 28 00:58:35.594826 containerd[1590]: 2026-01-28 00:58:35.560 [INFO][4521] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" HandleID="k8s-pod-network.6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.597356 containerd[1590]: 2026-01-28 00:58:35.563 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jxgdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca219588-36d1-44cb-b7f0-f29129c91014", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jxgdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c7f691e372", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:35.597356 containerd[1590]: 2026-01-28 00:58:35.564 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.597356 containerd[1590]: 2026-01-28 00:58:35.564 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c7f691e372 ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.597356 containerd[1590]: 2026-01-28 00:58:35.571 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.597356 containerd[1590]: 2026-01-28 00:58:35.573 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jxgdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca219588-36d1-44cb-b7f0-f29129c91014", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be", Pod:"csi-node-driver-jxgdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c7f691e372", MAC:"b6:a5:90:31:a8:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:35.597356 containerd[1590]: 2026-01-28 00:58:35.590 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be" Namespace="calico-system" Pod="csi-node-driver-jxgdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:35.627958 containerd[1590]: time="2026-01-28T00:58:35.627530563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:35.627958 containerd[1590]: time="2026-01-28T00:58:35.627679925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:35.627958 containerd[1590]: time="2026-01-28T00:58:35.627866949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:35.628297 containerd[1590]: time="2026-01-28T00:58:35.628018265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:35.672099 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:35.697941 containerd[1590]: time="2026-01-28T00:58:35.697830317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxgdl,Uid:ca219588-36d1-44cb-b7f0-f29129c91014,Namespace:calico-system,Attempt:1,} returns sandbox id \"6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be\"" Jan 28 00:58:35.702187 containerd[1590]: time="2026-01-28T00:58:35.700556847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:58:35.767735 containerd[1590]: time="2026-01-28T00:58:35.767605219Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:35.769603 containerd[1590]: time="2026-01-28T00:58:35.769492812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:58:35.769815 containerd[1590]: time="2026-01-28T00:58:35.769543754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:58:35.770108 kubelet[2760]: E0128 00:58:35.769995 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:58:35.770108 kubelet[2760]: E0128 00:58:35.770088 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:58:35.770385 kubelet[2760]: E0128 00:58:35.770303 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:35.773613 containerd[1590]: time="2026-01-28T00:58:35.773567610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:58:35.837796 containerd[1590]: time="2026-01-28T00:58:35.837405856Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:35.839395 containerd[1590]: time="2026-01-28T00:58:35.839276587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:58:35.839594 containerd[1590]: time="2026-01-28T00:58:35.839364510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:58:35.839812 kubelet[2760]: E0128 00:58:35.839743 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:58:35.840526 kubelet[2760]: E0128 00:58:35.839825 2760 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:58:35.840526 kubelet[2760]: E0128 00:58:35.840106 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:35.841735 kubelet[2760]: E0128 00:58:35.841602 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:35.862116 systemd-networkd[1249]: cali0cf5357259c: Gained IPv6LL Jan 28 00:58:36.144541 kubelet[2760]: E0128 00:58:36.144309 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:58:36.145457 kubelet[2760]: E0128 00:58:36.145343 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:37.145843 kubelet[2760]: E0128 00:58:37.145779 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:37.280941 containerd[1590]: time="2026-01-28T00:58:37.279666143Z" level=info msg="StopPodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\"" Jan 28 00:58:37.280941 containerd[1590]: time="2026-01-28T00:58:37.279665821Z" level=info msg="StopPodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\"" Jan 28 00:58:37.280941 containerd[1590]: time="2026-01-28T00:58:37.280853316Z" level=info msg="StopPodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\"" Jan 28 00:58:37.489581 containerd[1590]: 
2026-01-28 00:58:37.428 [INFO][4610] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.429 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" iface="eth0" netns="/var/run/netns/cni-9eaf6e18-3999-8c49-b08b-0a3ccd99ccae" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.429 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" iface="eth0" netns="/var/run/netns/cni-9eaf6e18-3999-8c49-b08b-0a3ccd99ccae" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.429 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" iface="eth0" netns="/var/run/netns/cni-9eaf6e18-3999-8c49-b08b-0a3ccd99ccae" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.429 [INFO][4610] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.429 [INFO][4610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.471 [INFO][4637] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.471 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.472 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.482 [WARNING][4637] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.482 [INFO][4637] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.485 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:37.489581 containerd[1590]: 2026-01-28 00:58:37.487 [INFO][4610] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:37.490261 containerd[1590]: time="2026-01-28T00:58:37.489892718Z" level=info msg="TearDown network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" successfully" Jan 28 00:58:37.490261 containerd[1590]: time="2026-01-28T00:58:37.489924720Z" level=info msg="StopPodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" returns successfully" Jan 28 00:58:37.491655 containerd[1590]: time="2026-01-28T00:58:37.491584308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-4dgpt,Uid:e87ecd7f-76fb-416b-97ac-bcf8061e4f34,Namespace:calico-apiserver,Attempt:1,}" Jan 28 00:58:37.494868 systemd[1]: run-netns-cni\x2d9eaf6e18\x2d3999\x2d8c49\x2db08b\x2d0a3ccd99ccae.mount: Deactivated successfully. Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.425 [INFO][4615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.426 [INFO][4615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" iface="eth0" netns="/var/run/netns/cni-8a1e300c-38c1-8972-7ba4-c9a67f5c0403" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.428 [INFO][4615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" iface="eth0" netns="/var/run/netns/cni-8a1e300c-38c1-8972-7ba4-c9a67f5c0403" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.433 [INFO][4615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" iface="eth0" netns="/var/run/netns/cni-8a1e300c-38c1-8972-7ba4-c9a67f5c0403" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.434 [INFO][4615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.434 [INFO][4615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.472 [INFO][4640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.472 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.485 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.496 [WARNING][4640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.496 [INFO][4640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.499 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:37.509964 containerd[1590]: 2026-01-28 00:58:37.505 [INFO][4615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:37.510566 containerd[1590]: time="2026-01-28T00:58:37.510239123Z" level=info msg="TearDown network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" successfully" Jan 28 00:58:37.510566 containerd[1590]: time="2026-01-28T00:58:37.510284178Z" level=info msg="StopPodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" returns successfully" Jan 28 00:58:37.511587 kubelet[2760]: E0128 00:58:37.511501 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:37.512679 containerd[1590]: time="2026-01-28T00:58:37.512516490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v27jx,Uid:8a1c4cdd-fd99-43db-b45c-fa57bb001ab8,Namespace:kube-system,Attempt:1,}" Jan 28 00:58:37.516657 systemd[1]: run-netns-cni\x2d8a1e300c\x2d38c1\x2d8972\x2d7ba4\x2dc9a67f5c0403.mount: Deactivated successfully. Jan 28 00:58:37.527530 systemd-networkd[1249]: cali0c7f691e372: Gained IPv6LL Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.431 [INFO][4609] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.432 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" iface="eth0" netns="/var/run/netns/cni-a163818d-fa80-072a-6163-1d9406f575b3" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.433 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" iface="eth0" netns="/var/run/netns/cni-a163818d-fa80-072a-6163-1d9406f575b3" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.435 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" iface="eth0" netns="/var/run/netns/cni-a163818d-fa80-072a-6163-1d9406f575b3" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.436 [INFO][4609] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.436 [INFO][4609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.480 [INFO][4650] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.480 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.499 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.512 [WARNING][4650] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.514 [INFO][4650] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.520 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:37.534773 containerd[1590]: 2026-01-28 00:58:37.524 [INFO][4609] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:37.534773 containerd[1590]: time="2026-01-28T00:58:37.531416968Z" level=info msg="TearDown network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" successfully" Jan 28 00:58:37.534773 containerd[1590]: time="2026-01-28T00:58:37.531572361Z" level=info msg="StopPodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" returns successfully" Jan 28 00:58:37.534773 containerd[1590]: time="2026-01-28T00:58:37.533570401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-df4fc,Uid:b6eec03e-d69e-4a77-be85-879339debc77,Namespace:calico-system,Attempt:1,}" Jan 28 00:58:37.534969 systemd[1]: run-netns-cni\x2da163818d\x2dfa80\x2d072a\x2d6163\x2d1d9406f575b3.mount: Deactivated successfully. 
Jan 28 00:58:37.721317 systemd-networkd[1249]: cali7198fd0cf14: Link UP Jan 28 00:58:37.730465 systemd-networkd[1249]: cali7198fd0cf14: Gained carrier Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.598 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0 calico-apiserver-d58fb4688- calico-apiserver e87ecd7f-76fb-416b-97ac-bcf8061e4f34 986 0 2026-01-28 00:58:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d58fb4688 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d58fb4688-4dgpt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7198fd0cf14 [] [] }} ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.598 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.651 [INFO][4702] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" HandleID="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.651 [INFO][4702] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" HandleID="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003436d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d58fb4688-4dgpt", "timestamp":"2026-01-28 00:58:37.651393678 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.652 [INFO][4702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.652 [INFO][4702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.652 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.662 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.669 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.679 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.683 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.686 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.686 [INFO][4702] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.689 [INFO][4702] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63 Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.694 [INFO][4702] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.706 [INFO][4702] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.706 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" host="localhost" Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.706 [INFO][4702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
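One record a few lines up deserves a note before the 4dgpt endpoint is finalized: kubelet's dns.go "Nameserver limits exceeded" warning. A pod's resolv.conf inherits the node's nameservers but is capped at three entries, the classic glibc resolver limit, so kubelet drops the extras and logs the applied line (1.1.1.1 1.0.0.1 8.8.8.8). A sketch under that assumption; the fourth host entry below is invented, since the log does not show the original list:

```go
// Why kubelet logs "Nameserver limits exceeded": the pod resolv.conf
// keeps at most three nameservers and the rest are omitted.
package main

import "fmt"

const maxNameservers = 3 // the glibc resolver reads at most three entries

func applyLimit(ns []string) []string {
	if len(ns) > maxNameservers {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 4th entry is an assumed extra
	fmt.Println("applied nameserver line:", applyLimit(host))
}
```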
Jan 28 00:58:37.754870 containerd[1590]: 2026-01-28 00:58:37.706 [INFO][4702] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" HandleID="k8s-pod-network.51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.755524 containerd[1590]: 2026-01-28 00:58:37.709 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"e87ecd7f-76fb-416b-97ac-bcf8061e4f34", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d58fb4688-4dgpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7198fd0cf14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:37.755524 containerd[1590]: 2026-01-28 00:58:37.710 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.755524 containerd[1590]: 2026-01-28 00:58:37.710 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7198fd0cf14 ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.755524 containerd[1590]: 2026-01-28 00:58:37.730 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.755524 containerd[1590]: 2026-01-28 00:58:37.731 [INFO][4664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"e87ecd7f-76fb-416b-97ac-bcf8061e4f34", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63", Pod:"calico-apiserver-d58fb4688-4dgpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7198fd0cf14", MAC:"56:91:57:a5:d5:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:37.755524 containerd[1590]: 2026-01-28 00:58:37.747 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63" Namespace="calico-apiserver" Pod="calico-apiserver-d58fb4688-4dgpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:37.826658 containerd[1590]: time="2026-01-28T00:58:37.823906837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:37.826658 containerd[1590]: time="2026-01-28T00:58:37.825942968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:37.826658 containerd[1590]: time="2026-01-28T00:58:37.825968718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:37.826658 containerd[1590]: time="2026-01-28T00:58:37.826135844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:37.855308 systemd-networkd[1249]: cali2d327296204: Link UP Jan 28 00:58:37.858895 systemd-networkd[1249]: cali2d327296204: Gained carrier Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.635 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--v27jx-eth0 coredns-668d6bf9bc- kube-system 8a1c4cdd-fd99-43db-b45c-fa57bb001ab8 984 0 2026-01-28 00:57:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-v27jx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d327296204 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.635 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.681 [INFO][4714] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" HandleID="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.682 [INFO][4714] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" HandleID="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049d050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-v27jx", "timestamp":"2026-01-28 00:58:37.681778515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.682 [INFO][4714] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.706 [INFO][4714] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.707 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.766 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.779 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.787 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.790 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.796 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.796 [INFO][4714] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.802 [INFO][4714] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.812 [INFO][4714] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.832 [INFO][4714] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.832 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" host="localhost" Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.832 [INFO][4714] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
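
Annotation: the ipam/ipam.go lines above are Calico's per-pod allocation path: look up the host's block affinity, confirm it against 192.168.88.128/26, then claim one address from the block under the host-wide lock (192.168.88.133 here, after .132 went to calico-apiserver-d58fb4688-4dgpt). The invariant the log asserts, that every claimed address falls inside the host's affine /26, can be checked with a minimal stdlib-only Go sketch; the block and addresses come from the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Affinity block confirmed for this host (ipam.go 511/235 above).
        block := netip.MustParsePrefix("192.168.88.128/26")
        // Addresses the plugin reports as claimed from it.
        for _, s := range []string{"192.168.88.132", "192.168.88.133"} {
            ip := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
        }
        fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64
    }
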
Jan 28 00:58:37.889432 containerd[1590]: 2026-01-28 00:58:37.832 [INFO][4714] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" HandleID="k8s-pod-network.f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.892095 containerd[1590]: 2026-01-28 00:58:37.843 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v27jx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-v27jx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d327296204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:37.892095 containerd[1590]: 2026-01-28 00:58:37.843 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.892095 containerd[1590]: 2026-01-28 00:58:37.843 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d327296204 ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.892095 containerd[1590]: 2026-01-28 00:58:37.860 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.892095 
containerd[1590]: 2026-01-28 00:58:37.865 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v27jx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef", Pod:"coredns-668d6bf9bc-v27jx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d327296204", MAC:"46:92:f8:a4:68:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:37.892095 containerd[1590]: 2026-01-28 00:58:37.884 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef" Namespace="kube-system" Pod="coredns-668d6bf9bc-v27jx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:37.919937 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:37.969659 containerd[1590]: time="2026-01-28T00:58:37.968390712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:37.969659 containerd[1590]: time="2026-01-28T00:58:37.968538160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:37.969659 containerd[1590]: time="2026-01-28T00:58:37.968561805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:37.969659 containerd[1590]: time="2026-01-28T00:58:37.968836133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:37.986310 systemd-networkd[1249]: calie041cecf1f4: Link UP Jan 28 00:58:37.987900 systemd-networkd[1249]: calie041cecf1f4: Gained carrier Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.637 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--df4fc-eth0 goldmane-666569f655- calico-system b6eec03e-d69e-4a77-be85-879339debc77 985 0 2026-01-28 00:58:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-df4fc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie041cecf1f4 [] [] }} ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.637 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.681 [INFO][4716] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" HandleID="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.682 [INFO][4716] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" HandleID="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001315d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-df4fc", "timestamp":"2026-01-28 00:58:37.68175739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.682 [INFO][4716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.832 [INFO][4716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
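
Annotation: the timestamps make the host-wide IPAM lock visible. The goldmane request logged "About to acquire host-wide IPAM lock" at 00:58:37.682 but "Acquired" only at 00:58:37.832, the same instant the coredns allocation above logged "Released host-wide IPAM lock": concurrent CNI ADDs on one node are serialized through that lock. A toy Go sketch of the pattern (a mutex around a shared allocator; explicitly not Calico's implementation):

    package main

    import (
        "fmt"
        "sync"
    )

    // toyIPAM hands out consecutive offsets in the block; the mutex stands
    // in for the host-wide lock of ipam_plugin.go 377/392/398.
    type toyIPAM struct {
        mu   sync.Mutex
        next int
    }

    func (a *toyIPAM) assign(pod string) {
        a.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."
        fmt.Printf("%s -> 192.168.88.%d\n", pod, 128+a.next)
        a.next++
    }

    func main() {
        a := &toyIPAM{next: 5} // .133 was just claimed; .134 comes next
        var wg sync.WaitGroup
        for _, pod := range []string{"goldmane-666569f655-df4fc", "coredns-668d6bf9bc-ms6pj"} {
            wg.Add(1)
            go func(p string) { defer wg.Done(); a.assign(p) }(pod)
        }
        wg.Wait() // the ADDs run concurrently but allocate one at a time
    }
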
Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.832 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.865 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.883 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.918 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.933 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.940 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.940 [INFO][4716] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.945 [INFO][4716] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.954 [INFO][4716] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.968 [INFO][4716] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.968 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" host="localhost" Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.968 [INFO][4716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
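
Annotation: the endpoint struct dumps print ports in hex, so the coredns entries above read Port:0x35 for dns and dns-tcp and Port:0x23c1 for metrics; those are 53 and 9153, matching the named ports in the plugin.go 340 line ({dns UDP 53 0} ... {metrics TCP 9153 0}). The conversion, as a sketch:

    package main

    import "fmt"

    func main() {
        // %#x reproduces the dump's notation, %d recovers the port number.
        for _, p := range []uint16{0x35, 0x23c1} {
            fmt.Printf("%#x = %d\n", p, p) // 0x35 = 53, 0x23c1 = 9153
        }
    }
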
Jan 28 00:58:38.036547 containerd[1590]: 2026-01-28 00:58:37.968 [INFO][4716] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" HandleID="k8s-pod-network.c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.038949 containerd[1590]: 2026-01-28 00:58:37.977 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--df4fc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b6eec03e-d69e-4a77-be85-879339debc77", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-df4fc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie041cecf1f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:38.038949 containerd[1590]: 2026-01-28 00:58:37.978 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.038949 containerd[1590]: 2026-01-28 00:58:37.978 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie041cecf1f4 ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.038949 containerd[1590]: 2026-01-28 00:58:37.989 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.038949 containerd[1590]: 2026-01-28 00:58:37.993 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--df4fc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b6eec03e-d69e-4a77-be85-879339debc77", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e", Pod:"goldmane-666569f655-df4fc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie041cecf1f4", MAC:"da:a2:89:bf:c2:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:38.038949 containerd[1590]: 2026-01-28 00:58:38.020 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e" Namespace="calico-system" Pod="goldmane-666569f655-df4fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:38.051656 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:38.070973 containerd[1590]: time="2026-01-28T00:58:38.070759987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d58fb4688-4dgpt,Uid:e87ecd7f-76fb-416b-97ac-bcf8061e4f34,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63\"" Jan 28 00:58:38.077899 containerd[1590]: time="2026-01-28T00:58:38.077855703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:58:38.129364 containerd[1590]: time="2026-01-28T00:58:38.129313378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v27jx,Uid:8a1c4cdd-fd99-43db-b45c-fa57bb001ab8,Namespace:kube-system,Attempt:1,} returns sandbox id \"f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef\"" Jan 28 00:58:38.132284 containerd[1590]: time="2026-01-28T00:58:38.129389474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:38.132284 containerd[1590]: time="2026-01-28T00:58:38.129471870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:38.132284 containerd[1590]: time="2026-01-28T00:58:38.129492509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:38.132284 containerd[1590]: time="2026-01-28T00:58:38.129624187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:38.133086 kubelet[2760]: E0128 00:58:38.132613 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:38.138155 containerd[1590]: time="2026-01-28T00:58:38.137971788Z" level=info msg="CreateContainer within sandbox \"f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:58:38.177308 containerd[1590]: time="2026-01-28T00:58:38.177187598Z" level=info msg="CreateContainer within sandbox \"f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"358f77f2bdc4a3ead61477ae494a5b43124876e934ce586fd15a76a3be3781e6\"" Jan 28 00:58:38.181997 containerd[1590]: time="2026-01-28T00:58:38.181782306Z" level=info msg="StartContainer for \"358f77f2bdc4a3ead61477ae494a5b43124876e934ce586fd15a76a3be3781e6\"" Jan 28 00:58:38.192755 containerd[1590]: time="2026-01-28T00:58:38.190968614Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:38.192755 containerd[1590]: time="2026-01-28T00:58:38.192419657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:58:38.192755 containerd[1590]: time="2026-01-28T00:58:38.192499869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:38.192947 kubelet[2760]: E0128 00:58:38.192675 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:38.192947 kubelet[2760]: E0128 00:58:38.192777 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:38.193576 kubelet[2760]: E0128 00:58:38.193289 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjfl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-4dgpt_calico-apiserver(e87ecd7f-76fb-416b-97ac-bcf8061e4f34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:38.195919 kubelet[2760]: E0128 00:58:38.194513 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:58:38.224144 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:38.266095 containerd[1590]: time="2026-01-28T00:58:38.265960477Z" level=info msg="StopPodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\"" Jan 28 00:58:38.308755 containerd[1590]: time="2026-01-28T00:58:38.308223650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-df4fc,Uid:b6eec03e-d69e-4a77-be85-879339debc77,Namespace:calico-system,Attempt:1,} returns sandbox id \"c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e\"" Jan 28 00:58:38.314862 containerd[1590]: time="2026-01-28T00:58:38.314795636Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:58:38.325356 containerd[1590]: time="2026-01-28T00:58:38.325181033Z" level=info msg="StartContainer for \"358f77f2bdc4a3ead61477ae494a5b43124876e934ce586fd15a76a3be3781e6\" returns successfully" Jan 28 00:58:38.413771 containerd[1590]: time="2026-01-28T00:58:38.412829717Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:38.415662 containerd[1590]: time="2026-01-28T00:58:38.415595715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:58:38.416350 containerd[1590]: time="2026-01-28T00:58:38.416311588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:38.417048 kubelet[2760]: E0128 00:58:38.416902 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:58:38.417873 kubelet[2760]: E0128 00:58:38.417070 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:58:38.417873 kubelet[2760]: E0128 00:58:38.417549 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9skff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-df4fc_calico-system(b6eec03e-d69e-4a77-be85-879339debc77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:38.420019 kubelet[2760]: E0128 00:58:38.418878 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.431 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.431 [INFO][4915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" iface="eth0" netns="/var/run/netns/cni-c71b9e6f-9a6b-8dc1-7e8e-1ba68ef1c57d" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.432 [INFO][4915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" iface="eth0" netns="/var/run/netns/cni-c71b9e6f-9a6b-8dc1-7e8e-1ba68ef1c57d" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.443 [INFO][4915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" iface="eth0" netns="/var/run/netns/cni-c71b9e6f-9a6b-8dc1-7e8e-1ba68ef1c57d" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.445 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.445 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.544 [INFO][4940] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.546 [INFO][4940] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.546 [INFO][4940] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.555 [WARNING][4940] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.555 [INFO][4940] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.558 [INFO][4940] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:38.565650 containerd[1590]: 2026-01-28 00:58:38.561 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:38.568078 containerd[1590]: time="2026-01-28T00:58:38.566797958Z" level=info msg="TearDown network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" successfully" Jan 28 00:58:38.568078 containerd[1590]: time="2026-01-28T00:58:38.566837103Z" level=info msg="StopPodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" returns successfully" Jan 28 00:58:38.569351 kubelet[2760]: E0128 00:58:38.568421 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:38.569916 containerd[1590]: time="2026-01-28T00:58:38.569883843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms6pj,Uid:93e9e1d3-02e7-468a-9d2c-3161f279d51b,Namespace:kube-system,Attempt:1,}" Jan 28 00:58:38.587587 systemd[1]: run-netns-cni\x2dc71b9e6f\x2d9a6b\x2d8dc1\x2d7e8e\x2d1ba68ef1c57d.mount: Deactivated successfully. 
Jan 28 00:58:38.844279 systemd-networkd[1249]: cali153283c780b: Link UP Jan 28 00:58:38.847929 systemd-networkd[1249]: cali153283c780b: Gained carrier Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.703 [INFO][4951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0 coredns-668d6bf9bc- kube-system 93e9e1d3-02e7-468a-9d2c-3161f279d51b 1016 0 2026-01-28 00:57:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ms6pj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali153283c780b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.703 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.751 [INFO][4967] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" HandleID="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.751 [INFO][4967] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" HandleID="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7a50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ms6pj", "timestamp":"2026-01-28 00:58:38.751528511 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.751 [INFO][4967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.751 [INFO][4967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.751 [INFO][4967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.768 [INFO][4967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.780 [INFO][4967] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.791 [INFO][4967] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.796 [INFO][4967] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.801 [INFO][4967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.801 [INFO][4967] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.805 [INFO][4967] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.813 [INFO][4967] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.831 [INFO][4967] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.831 [INFO][4967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" host="localhost" Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.831 [INFO][4967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
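
Annotation: each claim above is keyed by a handle of the form k8s-pod-network.<containerID> ("Creating new handle" at ipam.go 1780), and the teardown path earlier releases by that same key ("Releasing address using handleID", ipam_plugin.go 436). A toy handle-indexed allocator showing why release needs no IP argument and why a second release is a harmless no-op (cf. the WARNING at ipam_plugin.go 453); names and structure are illustrative only:

    package main

    import (
        "fmt"
        "net/netip"
    )

    type allocator struct {
        byHandle map[string]netip.Addr // "k8s-pod-network.<containerID>" -> claimed IP
        next     netip.Addr
    }

    func (a *allocator) claim(handle string) netip.Addr {
        ip := a.next
        a.byHandle[handle] = ip
        a.next = ip.Next()
        return ip
    }

    func (a *allocator) release(handle string) {
        if _, ok := a.byHandle[handle]; !ok {
            fmt.Println("asked to release address but it doesn't exist; ignoring")
            return
        }
        delete(a.byHandle, handle)
    }

    func main() {
        a := &allocator{byHandle: map[string]netip.Addr{}, next: netip.MustParseAddr("192.168.88.135")}
        h := "k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db"
        fmt.Println(a.claim(h)) // 192.168.88.135, as logged for coredns-668d6bf9bc-ms6pj
        a.release(h)            // frees by handle alone
        a.release(h)            // idempotent, like the teardown warnings in this log
    }
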
Jan 28 00:58:38.870121 containerd[1590]: 2026-01-28 00:58:38.831 [INFO][4967] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" HandleID="k8s-pod-network.5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.870859 containerd[1590]: 2026-01-28 00:58:38.836 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"93e9e1d3-02e7-468a-9d2c-3161f279d51b", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ms6pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali153283c780b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:38.870859 containerd[1590]: 2026-01-28 00:58:38.836 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.870859 containerd[1590]: 2026-01-28 00:58:38.836 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali153283c780b ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.870859 containerd[1590]: 2026-01-28 00:58:38.846 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.870859 
containerd[1590]: 2026-01-28 00:58:38.847 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"93e9e1d3-02e7-468a-9d2c-3161f279d51b", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db", Pod:"coredns-668d6bf9bc-ms6pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali153283c780b", MAC:"82:1d:af:7b:69:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:38.870859 containerd[1590]: 2026-01-28 00:58:38.865 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" Namespace="kube-system" Pod="coredns-668d6bf9bc-ms6pj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:38.909379 containerd[1590]: time="2026-01-28T00:58:38.908861634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:38.909379 containerd[1590]: time="2026-01-28T00:58:38.908936153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:38.909379 containerd[1590]: time="2026-01-28T00:58:38.908959237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:38.909379 containerd[1590]: time="2026-01-28T00:58:38.909182108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:38.955628 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:38.998866 systemd-networkd[1249]: cali7198fd0cf14: Gained IPv6LL Jan 28 00:58:39.011667 containerd[1590]: time="2026-01-28T00:58:39.011540597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ms6pj,Uid:93e9e1d3-02e7-468a-9d2c-3161f279d51b,Namespace:kube-system,Attempt:1,} returns sandbox id \"5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db\"" Jan 28 00:58:39.012907 kubelet[2760]: E0128 00:58:39.012839 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:39.015953 containerd[1590]: time="2026-01-28T00:58:39.015878937Z" level=info msg="CreateContainer within sandbox \"5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:58:39.037432 containerd[1590]: time="2026-01-28T00:58:39.037325046Z" level=info msg="CreateContainer within sandbox \"5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4624b90bf85558ab3803c4685ced6436e4c7cef0c582c77f88746b60447e8a03\"" Jan 28 00:58:39.038634 containerd[1590]: time="2026-01-28T00:58:39.038504491Z" level=info msg="StartContainer for \"4624b90bf85558ab3803c4685ced6436e4c7cef0c582c77f88746b60447e8a03\"" Jan 28 00:58:39.145094 containerd[1590]: time="2026-01-28T00:58:39.144921997Z" level=info msg="StartContainer for \"4624b90bf85558ab3803c4685ced6436e4c7cef0c582c77f88746b60447e8a03\" returns successfully" Jan 28 00:58:39.165102 kubelet[2760]: E0128 00:58:39.163913 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:58:39.214104 kubelet[2760]: E0128 00:58:39.213555 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:39.218566 kubelet[2760]: E0128 00:58:39.217038 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:39.218566 kubelet[2760]: E0128 00:58:39.218077 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 
28 00:58:39.239874 kubelet[2760]: I0128 00:58:39.239102 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v27jx" podStartSLOduration=45.239076129 podStartE2EDuration="45.239076129s" podCreationTimestamp="2026-01-28 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:58:39.236379953 +0000 UTC m=+50.214531606" watchObservedRunningTime="2026-01-28 00:58:39.239076129 +0000 UTC m=+50.217227751" Jan 28 00:58:39.254012 systemd-networkd[1249]: cali2d327296204: Gained IPv6LL Jan 28 00:58:39.260934 kubelet[2760]: I0128 00:58:39.260848 2760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ms6pj" podStartSLOduration=45.260829773 podStartE2EDuration="45.260829773s" podCreationTimestamp="2026-01-28 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:58:39.259425859 +0000 UTC m=+50.237577483" watchObservedRunningTime="2026-01-28 00:58:39.260829773 +0000 UTC m=+50.238981396" Jan 28 00:58:39.277579 containerd[1590]: time="2026-01-28T00:58:39.277420814Z" level=info msg="StopPodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\"" Jan 28 00:58:39.319009 systemd-networkd[1249]: calie041cecf1f4: Gained IPv6LL Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.448 [INFO][5090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.450 [INFO][5090] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" iface="eth0" netns="/var/run/netns/cni-afe750fa-6cbc-a22d-189b-ccbd398d0c30" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.451 [INFO][5090] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" iface="eth0" netns="/var/run/netns/cni-afe750fa-6cbc-a22d-189b-ccbd398d0c30" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.452 [INFO][5090] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" iface="eth0" netns="/var/run/netns/cni-afe750fa-6cbc-a22d-189b-ccbd398d0c30" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.452 [INFO][5090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.452 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.492 [INFO][5098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.492 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
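
Annotation: the pod_startup_latency_tracker lines are plain arithmetic over the fields they print: podStartSLOduration=45.239076129s is observedRunningTime (2026-01-28 00:58:39.239076129) minus podCreationTimestamp (2026-01-28 00:57:54), and the 0001-01-01 values for firstStartedPulling/lastFinishedPulling are Go's zero time, meaning no image pull was recorded for the coredns pods. Reproducing the subtraction:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-01-28T00:57:54Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-01-28T00:58:39.239076129Z")
        fmt.Println(running.Sub(created)) // 45.239076129s, the logged podStartSLOduration
        fmt.Println(time.Time{})          // 0001-01-01 00:00:00 +0000 UTC, the "never pulled" sentinel
    }
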
Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.492 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.503 [WARNING][5098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.503 [INFO][5098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.506 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:39.513054 containerd[1590]: 2026-01-28 00:58:39.509 [INFO][5090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:39.513941 containerd[1590]: time="2026-01-28T00:58:39.513339544Z" level=info msg="TearDown network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" successfully" Jan 28 00:58:39.513941 containerd[1590]: time="2026-01-28T00:58:39.513380371Z" level=info msg="StopPodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" returns successfully" Jan 28 00:58:39.514582 containerd[1590]: time="2026-01-28T00:58:39.514532499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8fb4bd5-kz6kn,Uid:9d40997d-8269-410f-a37f-77eca7302f00,Namespace:calico-system,Attempt:1,}" Jan 28 00:58:39.516666 systemd[1]: run-netns-cni\x2dafe750fa\x2d6cbc\x2da22d\x2d189b\x2dccbd398d0c30.mount: Deactivated successfully. 
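
The run-netns-cni-….mount deactivation above is systemd noticing that the CNI network namespace bind-mount disappeared once teardown completed. Namespaces that outlive a failed teardown stay visible under /var/run/netns; a small sketch that lists any cni-* leftovers (directory and name prefix as seen in this log, otherwise purely illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Each entry here is a bind-mounted network namespace; Calico names
        // sandbox namespaces cni-<uuid>, as in the teardown entries above.
        entries, err := os.ReadDir("/var/run/netns")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, e := range entries {
            if strings.HasPrefix(e.Name(), "cni-") {
                fmt.Println("leftover CNI netns:", e.Name())
            }
        }
    }
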
Jan 28 00:58:39.760972 systemd-networkd[1249]: calieb096c6aeae: Link UP Jan 28 00:58:39.762862 systemd-networkd[1249]: calieb096c6aeae: Gained carrier Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.590 [INFO][5106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0 calico-kube-controllers-55b8fb4bd5- calico-system 9d40997d-8269-410f-a37f-77eca7302f00 1046 0 2026-01-28 00:58:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55b8fb4bd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55b8fb4bd5-kz6kn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieb096c6aeae [] [] }} ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.590 [INFO][5106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.627 [INFO][5120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" HandleID="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.628 [INFO][5120] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" HandleID="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55b8fb4bd5-kz6kn", "timestamp":"2026-01-28 00:58:39.627946865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.628 [INFO][5120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.628 [INFO][5120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
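
"Link UP" followed by "Gained carrier" for calieb096c6aeae is systemd-networkd reporting the new veth's kernel carrier flag, and the same state is readable from sysfs. A sketch using the interface name from this log (reading carrier on a downed interface returns an error, which the sketch simply prints):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // "Gained carrier" corresponds to this sysfs flag flipping to 1
        // for the host side of the pod's veth pair.
        iface := "calieb096c6aeae" // interface name taken from the log above
        b, err := os.ReadFile("/sys/class/net/" + iface + "/carrier")
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // EINVAL while the link is still down
            os.Exit(1)
        }
        fmt.Printf("%s carrier=%s\n", iface, strings.TrimSpace(string(b)))
    }
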
Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.628 [INFO][5120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.636 [INFO][5120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.646 [INFO][5120] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.704 [INFO][5120] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.711 [INFO][5120] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.717 [INFO][5120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.717 [INFO][5120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.733 [INFO][5120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.739 [INFO][5120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.748 [INFO][5120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.748 [INFO][5120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" host="localhost" Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.748 [INFO][5120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
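
The IPAM walk above confirms the host's affinity for block 192.168.88.128/26 (64 addresses, .128 through .191) and claims 192.168.88.136 from it. The arithmetic checks out with the Go standard library; values are copied from the entries above:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log
        addr := netip.MustParseAddr("192.168.88.136")       // address handed out

        fmt.Println(block.Contains(addr))      // true: .136 sits inside .128-.191
        fmt.Println(1 << (32 - block.Bits()))  // 64 addresses in a /26
    }
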
Jan 28 00:58:39.783504 containerd[1590]: 2026-01-28 00:58:39.748 [INFO][5120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" HandleID="k8s-pod-network.d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.785270 containerd[1590]: 2026-01-28 00:58:39.754 [INFO][5106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0", GenerateName:"calico-kube-controllers-55b8fb4bd5-", Namespace:"calico-system", SelfLink:"", UID:"9d40997d-8269-410f-a37f-77eca7302f00", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b8fb4bd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55b8fb4bd5-kz6kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb096c6aeae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:39.785270 containerd[1590]: 2026-01-28 00:58:39.754 [INFO][5106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.785270 containerd[1590]: 2026-01-28 00:58:39.754 [INFO][5106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb096c6aeae ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.785270 containerd[1590]: 2026-01-28 00:58:39.762 [INFO][5106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.785270 containerd[1590]: 2026-01-28 00:58:39.763 [INFO][5106] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0", GenerateName:"calico-kube-controllers-55b8fb4bd5-", Namespace:"calico-system", SelfLink:"", UID:"9d40997d-8269-410f-a37f-77eca7302f00", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b8fb4bd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb", Pod:"calico-kube-controllers-55b8fb4bd5-kz6kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb096c6aeae", MAC:"62:a5:e0:b4:bf:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:39.785270 containerd[1590]: 2026-01-28 00:58:39.778 [INFO][5106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb" Namespace="calico-system" Pod="calico-kube-controllers-55b8fb4bd5-kz6kn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:39.810780 containerd[1590]: time="2026-01-28T00:58:39.809345934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:58:39.810780 containerd[1590]: time="2026-01-28T00:58:39.810654637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:58:39.810780 containerd[1590]: time="2026-01-28T00:58:39.810667071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:39.811022 containerd[1590]: time="2026-01-28T00:58:39.810875515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:58:39.853194 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 00:58:39.892873 containerd[1590]: time="2026-01-28T00:58:39.892813269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55b8fb4bd5-kz6kn,Uid:9d40997d-8269-410f-a37f-77eca7302f00,Namespace:calico-system,Attempt:1,} returns sandbox id \"d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb\"" Jan 28 00:58:39.894670 containerd[1590]: time="2026-01-28T00:58:39.894627237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:58:39.976557 containerd[1590]: time="2026-01-28T00:58:39.976391466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:39.978217 containerd[1590]: time="2026-01-28T00:58:39.978102890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:58:39.978387 containerd[1590]: time="2026-01-28T00:58:39.978229233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:58:39.978604 kubelet[2760]: E0128 00:58:39.978519 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:58:39.978604 kubelet[2760]: E0128 00:58:39.978593 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:58:39.978858 kubelet[2760]: E0128 00:58:39.978782 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7j2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55b8fb4bd5-kz6kn_calico-system(9d40997d-8269-410f-a37f-77eca7302f00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:39.980212 kubelet[2760]: E0128 00:58:39.980036 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:58:40.221989 kubelet[2760]: E0128 00:58:40.221607 2760 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:40.224813 kubelet[2760]: E0128 00:58:40.222139 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:40.224813 kubelet[2760]: E0128 00:58:40.224680 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:58:40.232638 kubelet[2760]: E0128 00:58:40.232464 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:58:40.854010 systemd-networkd[1249]: cali153283c780b: Gained IPv6LL Jan 28 00:58:41.226420 kubelet[2760]: E0128 00:58:41.225256 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:41.226420 kubelet[2760]: E0128 00:58:41.225335 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:41.226420 kubelet[2760]: E0128 00:58:41.226060 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:58:41.238073 systemd-networkd[1249]: calieb096c6aeae: Gained IPv6LL Jan 28 00:58:45.915578 containerd[1590]: time="2026-01-28T00:58:45.915506989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:58:45.974220 kubelet[2760]: I0128 00:58:45.973524 2760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 00:58:45.975056 kubelet[2760]: E0128 00:58:45.974663 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:46.070271 containerd[1590]: 
time="2026-01-28T00:58:46.070201047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:46.076756 containerd[1590]: time="2026-01-28T00:58:46.071787461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:58:46.076756 containerd[1590]: time="2026-01-28T00:58:46.071996425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:58:46.076937 kubelet[2760]: E0128 00:58:46.074765 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:58:46.076937 kubelet[2760]: E0128 00:58:46.075030 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:58:46.077795 kubelet[2760]: E0128 00:58:46.077732 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:720b431776f9430c801351a09b535fb1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:46.089827 containerd[1590]: time="2026-01-28T00:58:46.089350613Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:58:46.258144 containerd[1590]: time="2026-01-28T00:58:46.251278170Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:46.260922 containerd[1590]: time="2026-01-28T00:58:46.260548793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:58:46.261367 containerd[1590]: time="2026-01-28T00:58:46.260678866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:58:46.261763 kubelet[2760]: E0128 00:58:46.261615 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:58:46.262048 kubelet[2760]: E0128 00:58:46.261994 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:58:46.262807 kubelet[2760]: E0128 00:58:46.262770 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:46.265348 kubelet[2760]: E0128 00:58:46.264981 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:58:46.996909 kubelet[2760]: E0128 00:58:46.996351 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:58:49.299746 containerd[1590]: time="2026-01-28T00:58:49.299496410Z" level=info msg="StopPodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\"" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.508 
[WARNING][5248] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"93e9e1d3-02e7-468a-9d2c-3161f279d51b", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db", Pod:"coredns-668d6bf9bc-ms6pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali153283c780b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.510 [INFO][5248] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.510 [INFO][5248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" iface="eth0" netns="" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.510 [INFO][5248] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.510 [INFO][5248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.720 [INFO][5257] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.722 [INFO][5257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
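
The WARNING opening this trace is Calico's stale-delete guard: the sandbox being removed (d1b85e70…) is no longer the one recorded on the WorkloadEndpoint (5d7200a1…, the replacement created earlier for coredns-668d6bf9bc-ms6pj), so the WEP is kept and only the address-release path runs. A minimal sketch of that comparison; the function name is invented for illustration:

    package main

    import "fmt"

    // shouldDeleteEndpoint sketches the guard behind the WARNING above: a
    // CNI DEL for a stale sandbox must not remove a WorkloadEndpoint that
    // has since been claimed by a newer sandbox.
    func shouldDeleteEndpoint(cniContainerID, wepContainerID string) bool {
        return wepContainerID == "" || cniContainerID == wepContainerID
    }

    func main() {
        stale := "d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48"   // sandbox being deleted
        current := "5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db" // ID recorded on the WEP
        fmt.Println(shouldDeleteEndpoint(stale, current)) // false: keep the WEP
    }
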
Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.722 [INFO][5257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.738 [WARNING][5257] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.739 [INFO][5257] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.742 [INFO][5257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:49.760632 containerd[1590]: 2026-01-28 00:58:49.745 [INFO][5248] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:49.769868 containerd[1590]: time="2026-01-28T00:58:49.764869349Z" level=info msg="TearDown network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" successfully" Jan 28 00:58:49.769868 containerd[1590]: time="2026-01-28T00:58:49.765490539Z" level=info msg="StopPodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" returns successfully" Jan 28 00:58:49.806399 containerd[1590]: time="2026-01-28T00:58:49.802844516Z" level=info msg="RemovePodSandbox for \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\"" Jan 28 00:58:49.808960 containerd[1590]: time="2026-01-28T00:58:49.807198846Z" level=info msg="Forcibly stopping sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\"" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.019 [WARNING][5275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"93e9e1d3-02e7-468a-9d2c-3161f279d51b", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d7200a1058f953c31a42f23fb8f1c95017cc525e26c43799d272ed786d802db", Pod:"coredns-668d6bf9bc-ms6pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali153283c780b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.019 [INFO][5275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.019 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" iface="eth0" netns="" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.019 [INFO][5275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.020 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.110 [INFO][5285] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.111 [INFO][5285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.111 [INFO][5285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
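
Every PullImage failure in this log follows one shape: ghcr.io answers 404 for the ghcr.io/flatcar/calico/* tag, containerd logs "trying next host", exhausts its host list, and surfaces NotFound over the CRI, which the kubelet converts to ErrImagePull and eventually ImagePullBackOff. A sketch reproducing one such pull with the containerd Go client — assuming the stock socket path, the k8s.io CRI namespace, and the v1 client module path, none of which are confirmed by this log:

    package main

    import (
        "context"
        "fmt"
        "os"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer client.Close()

        // The kubelet's pulls run in the k8s.io containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Same reference the kubelet keeps retrying above; the registry has
        // no such tag, so this should fail with a "not found" error.
        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.4",
            containerd.WithPullUnpack)
        if err != nil {
            fmt.Println("pull failed:", err)
        }
    }
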
Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.119 [WARNING][5285] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.119 [INFO][5285] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" HandleID="k8s-pod-network.d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Workload="localhost-k8s-coredns--668d6bf9bc--ms6pj-eth0" Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.122 [INFO][5285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:50.164494 containerd[1590]: 2026-01-28 00:58:50.155 [INFO][5275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48" Jan 28 00:58:50.166907 containerd[1590]: time="2026-01-28T00:58:50.165353351Z" level=info msg="TearDown network for sandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" successfully" Jan 28 00:58:50.172986 containerd[1590]: time="2026-01-28T00:58:50.172927739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:50.173069 containerd[1590]: time="2026-01-28T00:58:50.173047895Z" level=info msg="RemovePodSandbox \"d1b85e7094df8f512dc3601f278d703740973dc9cbb9e28506c6e74f056dbe48\" returns successfully" Jan 28 00:58:50.216132 containerd[1590]: time="2026-01-28T00:58:50.215872450Z" level=info msg="StopPodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\"" Jan 28 00:58:50.276310 containerd[1590]: time="2026-01-28T00:58:50.276101190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:58:50.472801 containerd[1590]: time="2026-01-28T00:58:50.462124301Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:50.477730 containerd[1590]: time="2026-01-28T00:58:50.475926638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:58:50.477730 containerd[1590]: time="2026-01-28T00:58:50.476082991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:50.480244 kubelet[2760]: E0128 00:58:50.479063 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:50.480244 kubelet[2760]: E0128 00:58:50.479294 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:50.482243 kubelet[2760]: E0128 00:58:50.480823 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cgj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-vm2xw_calico-apiserver(29716958-c780-41f2-b2ff-5fbdb74c3998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:50.482868 containerd[1590]: time="2026-01-28T00:58:50.481864881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:58:50.483293 kubelet[2760]: E0128 00:58:50.482449 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:58:50.901153 containerd[1590]: time="2026-01-28T00:58:50.900804158Z" level=info msg="trying next host 
- response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:50.903306 containerd[1590]: time="2026-01-28T00:58:50.903228460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:58:50.903420 containerd[1590]: time="2026-01-28T00:58:50.903354678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:50.903644 kubelet[2760]: E0128 00:58:50.903600 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:50.903949 kubelet[2760]: E0128 00:58:50.903668 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:58:50.904109 kubelet[2760]: E0128 00:58:50.904050 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjfl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-4dgpt_calico-apiserver(e87ecd7f-76fb-416b-97ac-bcf8061e4f34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:50.905942 kubelet[2760]: E0128 00:58:50.905876 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.411 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0", GenerateName:"calico-kube-controllers-55b8fb4bd5-", Namespace:"calico-system", SelfLink:"", UID:"9d40997d-8269-410f-a37f-77eca7302f00", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b8fb4bd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb", Pod:"calico-kube-controllers-55b8fb4bd5-kz6kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb096c6aeae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.473 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.473 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" iface="eth0" netns="" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.473 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.474 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.883 [INFO][5311] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.884 [INFO][5311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.884 [INFO][5311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.893 [WARNING][5311] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.893 [INFO][5311] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.895 [INFO][5311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:50.906092 containerd[1590]: 2026-01-28 00:58:50.900 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:50.907027 containerd[1590]: time="2026-01-28T00:58:50.906257009Z" level=info msg="TearDown network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" successfully" Jan 28 00:58:50.907027 containerd[1590]: time="2026-01-28T00:58:50.906281764Z" level=info msg="StopPodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" returns successfully" Jan 28 00:58:50.908223 containerd[1590]: time="2026-01-28T00:58:50.907769290Z" level=info msg="RemovePodSandbox for \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\"" Jan 28 00:58:50.908223 containerd[1590]: time="2026-01-28T00:58:50.907816418Z" level=info msg="Forcibly stopping sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\"" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.083 [WARNING][5328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0", GenerateName:"calico-kube-controllers-55b8fb4bd5-", Namespace:"calico-system", SelfLink:"", UID:"9d40997d-8269-410f-a37f-77eca7302f00", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55b8fb4bd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d78c95abe7fdcaf122298b19c5f34fb36958848419a00e875c1504ad1bb68ddb", Pod:"calico-kube-controllers-55b8fb4bd5-kz6kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb096c6aeae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.083 [INFO][5328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.084 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" iface="eth0" netns="" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.084 [INFO][5328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.084 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.622 [INFO][5337] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.624 [INFO][5337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.624 [INFO][5337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.646 [WARNING][5337] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.646 [INFO][5337] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" HandleID="k8s-pod-network.cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Workload="localhost-k8s-calico--kube--controllers--55b8fb4bd5--kz6kn-eth0" Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.648 [INFO][5337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:51.658926 containerd[1590]: 2026-01-28 00:58:51.652 [INFO][5328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75" Jan 28 00:58:51.658926 containerd[1590]: time="2026-01-28T00:58:51.657186018Z" level=info msg="TearDown network for sandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" successfully" Jan 28 00:58:51.665486 containerd[1590]: time="2026-01-28T00:58:51.665410764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:51.665737 containerd[1590]: time="2026-01-28T00:58:51.665561107Z" level=info msg="RemovePodSandbox \"cfcbcd7bca89ecbd92e3401041205788337f2c0c49064fb1622f0d70fe351b75\" returns successfully" Jan 28 00:58:51.666660 containerd[1590]: time="2026-01-28T00:58:51.666520765Z" level=info msg="StopPodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\"" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.007 [WARNING][5353] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"e87ecd7f-76fb-416b-97ac-bcf8061e4f34", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63", Pod:"calico-apiserver-d58fb4688-4dgpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7198fd0cf14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.009 [INFO][5353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.010 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" iface="eth0" netns="" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.012 [INFO][5353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.012 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.096 [INFO][5362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.096 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.096 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.105 [WARNING][5362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.105 [INFO][5362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.108 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:52.117410 containerd[1590]: 2026-01-28 00:58:52.113 [INFO][5353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.117410 containerd[1590]: time="2026-01-28T00:58:52.117155259Z" level=info msg="TearDown network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" successfully" Jan 28 00:58:52.117410 containerd[1590]: time="2026-01-28T00:58:52.117187008Z" level=info msg="StopPodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" returns successfully" Jan 28 00:58:52.119657 containerd[1590]: time="2026-01-28T00:58:52.119145372Z" level=info msg="RemovePodSandbox for \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\"" Jan 28 00:58:52.119657 containerd[1590]: time="2026-01-28T00:58:52.119195407Z" level=info msg="Forcibly stopping sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\"" Jan 28 00:58:52.270316 containerd[1590]: time="2026-01-28T00:58:52.269358977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:58:52.376392 containerd[1590]: time="2026-01-28T00:58:52.375793765Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:52.388876 containerd[1590]: time="2026-01-28T00:58:52.388646983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:58:52.389031 containerd[1590]: time="2026-01-28T00:58:52.388923384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:58:52.392133 kubelet[2760]: E0128 00:58:52.391438 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:58:52.392133 kubelet[2760]: E0128 00:58:52.391631 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:58:52.393160 kubelet[2760]: E0128 00:58:52.392659 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:52.394910 containerd[1590]: time="2026-01-28T00:58:52.394744252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.260 [WARNING][5380] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"e87ecd7f-76fb-416b-97ac-bcf8061e4f34", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51c73c81bb2246ba7734a6a021126c3acc04dd919f9a9fa88cfd689aa4ce4e63", Pod:"calico-apiserver-d58fb4688-4dgpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7198fd0cf14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.262 [INFO][5380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.263 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" iface="eth0" netns="" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.263 [INFO][5380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.263 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.379 [INFO][5388] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.380 [INFO][5388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.381 [INFO][5388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.390 [WARNING][5388] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.390 [INFO][5388] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" HandleID="k8s-pod-network.8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Workload="localhost-k8s-calico--apiserver--d58fb4688--4dgpt-eth0" Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.394 [INFO][5388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:52.408546 containerd[1590]: 2026-01-28 00:58:52.399 [INFO][5380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788" Jan 28 00:58:52.409432 containerd[1590]: time="2026-01-28T00:58:52.408552957Z" level=info msg="TearDown network for sandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" successfully" Jan 28 00:58:52.417842 containerd[1590]: time="2026-01-28T00:58:52.417470825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:52.417842 containerd[1590]: time="2026-01-28T00:58:52.417630075Z" level=info msg="RemovePodSandbox \"8a6bfa8afb223ded3e49b3bfc4e88d2ef03ae366cfcadf23eebf8c9c58a57788\" returns successfully" Jan 28 00:58:52.418650 containerd[1590]: time="2026-01-28T00:58:52.418576508Z" level=info msg="StopPodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\"" Jan 28 00:58:52.489117 containerd[1590]: time="2026-01-28T00:58:52.489011305Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:52.496735 containerd[1590]: time="2026-01-28T00:58:52.494281035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:58:52.496735 containerd[1590]: time="2026-01-28T00:58:52.494388667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:58:52.509022 kubelet[2760]: E0128 00:58:52.508940 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:58:52.509199 kubelet[2760]: E0128 00:58:52.509113 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" 
Jan 28 00:58:52.509865 kubelet[2760]: E0128 00:58:52.509722 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7j2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55b8fb4bd5-kz6kn_calico-system(9d40997d-8269-410f-a37f-77eca7302f00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:52.517749 containerd[1590]: time="2026-01-28T00:58:52.515999408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:58:52.519215 kubelet[2760]: E0128 00:58:52.519126 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:58:52.946152 containerd[1590]: time="2026-01-28T00:58:52.941753783Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:52.972028 containerd[1590]: time="2026-01-28T00:58:52.971450520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:58:52.972028 containerd[1590]: time="2026-01-28T00:58:52.971737526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:58:52.974170 kubelet[2760]: E0128 00:58:52.972629 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:58:52.974170 kubelet[2760]: E0128 00:58:52.972864 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:58:52.974170 kubelet[2760]: E0128 00:58:52.973224 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:52.977118 kubelet[2760]: E0128 00:58:52.976986 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:52.646 [WARNING][5404] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" WorkloadEndpoint="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:52.646 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.044890 
containerd[1590]: 2026-01-28 00:58:52.646 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" iface="eth0" netns="" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:52.646 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:52.646 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.000 [INFO][5413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.001 [INFO][5413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.002 [INFO][5413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.011 [WARNING][5413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.016 [INFO][5413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.019 [INFO][5413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:53.044890 containerd[1590]: 2026-01-28 00:58:53.024 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.044890 containerd[1590]: time="2026-01-28T00:58:53.044808832Z" level=info msg="TearDown network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" successfully" Jan 28 00:58:53.044890 containerd[1590]: time="2026-01-28T00:58:53.044984964Z" level=info msg="StopPodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" returns successfully" Jan 28 00:58:53.050399 containerd[1590]: time="2026-01-28T00:58:53.050278807Z" level=info msg="RemovePodSandbox for \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\"" Jan 28 00:58:53.050399 containerd[1590]: time="2026-01-28T00:58:53.050357855Z" level=info msg="Forcibly stopping sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\"" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.132 [WARNING][5432] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" WorkloadEndpoint="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.132 [INFO][5432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.133 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" iface="eth0" netns="" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.133 [INFO][5432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.133 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.197 [INFO][5441] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.199 [INFO][5441] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.199 [INFO][5441] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.208 [WARNING][5441] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.208 [INFO][5441] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" HandleID="k8s-pod-network.f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Workload="localhost-k8s-whisker--77ff8946cf--7wfpb-eth0" Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.210 [INFO][5441] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:53.219963 containerd[1590]: 2026-01-28 00:58:53.215 [INFO][5432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0" Jan 28 00:58:53.219963 containerd[1590]: time="2026-01-28T00:58:53.219923802Z" level=info msg="TearDown network for sandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" successfully" Jan 28 00:58:53.259347 containerd[1590]: time="2026-01-28T00:58:53.257099175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:53.316219 containerd[1590]: time="2026-01-28T00:58:53.315181856Z" level=info msg="RemovePodSandbox \"f5a8811ea40bacb0536fdc81e90205d9b7cea75945b4e29bf74d8d01e09c3db0\" returns successfully" Jan 28 00:58:53.406141 containerd[1590]: time="2026-01-28T00:58:53.405274128Z" level=info msg="StopPodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\"" Jan 28 00:58:54.163412 containerd[1590]: time="2026-01-28T00:58:54.160999126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:58:54.406164 containerd[1590]: time="2026-01-28T00:58:54.405479542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:58:54.489102 containerd[1590]: time="2026-01-28T00:58:54.488390565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:58:54.492952 containerd[1590]: time="2026-01-28T00:58:54.492045981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:58:54.495604 kubelet[2760]: E0128 00:58:54.492656 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:58:54.495604 kubelet[2760]: E0128 00:58:54.492787 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:58:54.495604 kubelet[2760]: E0128 00:58:54.493384 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9skff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-df4fc_calico-system(b6eec03e-d69e-4a77-be85-879339debc77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:58:54.495604 kubelet[2760]: E0128 00:58:54.494827 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.379 [WARNING][5459] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v27jx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef", Pod:"coredns-668d6bf9bc-v27jx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d327296204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.380 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.380 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" iface="eth0" netns="" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.380 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.380 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.556 [INFO][5474] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.557 [INFO][5474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.557 [INFO][5474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.569 [WARNING][5474] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.569 [INFO][5474] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.572 [INFO][5474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:54.582047 containerd[1590]: 2026-01-28 00:58:54.577 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:54.601044 containerd[1590]: time="2026-01-28T00:58:54.597839840Z" level=info msg="TearDown network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" successfully" Jan 28 00:58:54.601044 containerd[1590]: time="2026-01-28T00:58:54.598784309Z" level=info msg="StopPodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" returns successfully" Jan 28 00:58:54.715662 containerd[1590]: time="2026-01-28T00:58:54.705191085Z" level=info msg="RemovePodSandbox for \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\"" Jan 28 00:58:54.715662 containerd[1590]: time="2026-01-28T00:58:54.705872087Z" level=info msg="Forcibly stopping sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\"" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.944 [WARNING][5491] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v27jx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a1c4cdd-fd99-43db-b45c-fa57bb001ab8", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f049545d7bc8705b30a0e0753221bf33518e006130ec4b535ef0090e992ceeef", Pod:"coredns-668d6bf9bc-v27jx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d327296204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.945 [INFO][5491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.945 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" iface="eth0" netns="" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.945 [INFO][5491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.945 [INFO][5491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.988 [INFO][5499] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.989 [INFO][5499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:54.989 [INFO][5499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:55.052 [WARNING][5499] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:55.054 [INFO][5499] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" HandleID="k8s-pod-network.8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Workload="localhost-k8s-coredns--668d6bf9bc--v27jx-eth0" Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:55.106 [INFO][5499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:55.124887 containerd[1590]: 2026-01-28 00:58:55.116 [INFO][5491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3" Jan 28 00:58:55.124887 containerd[1590]: time="2026-01-28T00:58:55.124429864Z" level=info msg="TearDown network for sandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" successfully" Jan 28 00:58:55.154025 containerd[1590]: time="2026-01-28T00:58:55.153625315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:55.154025 containerd[1590]: time="2026-01-28T00:58:55.153839127Z" level=info msg="RemovePodSandbox \"8318232b9a0e5d4ed949a7000e5979e1b008a06ea5ea71741e3f80286760eae3\" returns successfully" Jan 28 00:58:55.154973 containerd[1590]: time="2026-01-28T00:58:55.154896003Z" level=info msg="StopPodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\"" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.215 [WARNING][5518] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--df4fc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b6eec03e-d69e-4a77-be85-879339debc77", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e", Pod:"goldmane-666569f655-df4fc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie041cecf1f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.216 [INFO][5518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.216 [INFO][5518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" iface="eth0" netns="" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.216 [INFO][5518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.216 [INFO][5518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.322 [INFO][5526] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.326 [INFO][5526] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.327 [INFO][5526] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.339 [WARNING][5526] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.339 [INFO][5526] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.343 [INFO][5526] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:55.350994 containerd[1590]: 2026-01-28 00:58:55.347 [INFO][5518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.353260 containerd[1590]: time="2026-01-28T00:58:55.352904510Z" level=info msg="TearDown network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" successfully" Jan 28 00:58:55.353260 containerd[1590]: time="2026-01-28T00:58:55.353019667Z" level=info msg="StopPodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" returns successfully" Jan 28 00:58:55.355015 containerd[1590]: time="2026-01-28T00:58:55.354277746Z" level=info msg="RemovePodSandbox for \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\"" Jan 28 00:58:55.355015 containerd[1590]: time="2026-01-28T00:58:55.354317160Z" level=info msg="Forcibly stopping sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\"" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.446 [WARNING][5543] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--df4fc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b6eec03e-d69e-4a77-be85-879339debc77", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2e8077ad01b86a381926351ca879f13331e113e8d8ede429f6111cb40cc870e", Pod:"goldmane-666569f655-df4fc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie041cecf1f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.449 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.449 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" iface="eth0" netns="" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.449 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.449 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.663 [INFO][5551] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.663 [INFO][5551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.663 [INFO][5551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.672 [WARNING][5551] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.672 [INFO][5551] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" HandleID="k8s-pod-network.bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Workload="localhost-k8s-goldmane--666569f655--df4fc-eth0" Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.675 [INFO][5551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:55.685642 containerd[1590]: 2026-01-28 00:58:55.681 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73" Jan 28 00:58:55.688067 containerd[1590]: time="2026-01-28T00:58:55.686822327Z" level=info msg="TearDown network for sandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" successfully" Jan 28 00:58:55.695524 containerd[1590]: time="2026-01-28T00:58:55.695427582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:55.696319 containerd[1590]: time="2026-01-28T00:58:55.695876528Z" level=info msg="RemovePodSandbox \"bd6ed7837a051becdfbf7cd22f019721a8e8fb3de4bdee40f8fc87cc2520ce73\" returns successfully" Jan 28 00:58:55.697415 containerd[1590]: time="2026-01-28T00:58:55.696648801Z" level=info msg="StopPodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\"" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:55.823 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jxgdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca219588-36d1-44cb-b7f0-f29129c91014", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be", Pod:"csi-node-driver-jxgdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c7f691e372", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:55.832 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:55.832 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" iface="eth0" netns="" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:55.832 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:55.832 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:56.097 [INFO][5576] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:56.201 [INFO][5576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:56.204 [INFO][5576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:56.840 [WARNING][5576] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:56.893 [INFO][5576] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:57.571 [INFO][5576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:57.705252 containerd[1590]: 2026-01-28 00:58:57.694 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.706826 containerd[1590]: time="2026-01-28T00:58:57.706321985Z" level=info msg="TearDown network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" successfully" Jan 28 00:58:57.706826 containerd[1590]: time="2026-01-28T00:58:57.706421171Z" level=info msg="StopPodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" returns successfully" Jan 28 00:58:57.724903 containerd[1590]: time="2026-01-28T00:58:57.724589674Z" level=info msg="RemovePodSandbox for \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\"" Jan 28 00:58:57.725747 containerd[1590]: time="2026-01-28T00:58:57.725348213Z" level=info msg="Forcibly stopping sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\"" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.848 [WARNING][5594] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jxgdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca219588-36d1-44cb-b7f0-f29129c91014", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6db59df046718ebfa2698e49c720ab312632fc5eac2432166d1ae44207b929be", Pod:"csi-node-driver-jxgdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0c7f691e372", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.848 [INFO][5594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.848 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" iface="eth0" netns="" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.848 [INFO][5594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.848 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.958 [INFO][5602] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.960 [INFO][5602] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.960 [INFO][5602] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.973 [WARNING][5602] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.973 [INFO][5602] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" HandleID="k8s-pod-network.5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Workload="localhost-k8s-csi--node--driver--jxgdl-eth0" Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.977 [INFO][5602] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:57.990550 containerd[1590]: 2026-01-28 00:58:57.983 [INFO][5594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55" Jan 28 00:58:57.990550 containerd[1590]: time="2026-01-28T00:58:57.989339181Z" level=info msg="TearDown network for sandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" successfully" Jan 28 00:58:57.995772 containerd[1590]: time="2026-01-28T00:58:57.995601260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:57.995864 containerd[1590]: time="2026-01-28T00:58:57.995806236Z" level=info msg="RemovePodSandbox \"5d7131c0a7484252bc229d2a5b031091650e512c5108925b97b43e6e80de9f55\" returns successfully" Jan 28 00:58:57.997100 containerd[1590]: time="2026-01-28T00:58:57.997001486Z" level=info msg="StopPodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\"" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.078 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"29716958-c780-41f2-b2ff-5fbdb74c3998", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7", Pod:"calico-apiserver-d58fb4688-vm2xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cf5357259c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.078 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.079 [INFO][5623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" iface="eth0" netns="" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.079 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.079 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.114 [INFO][5632] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.114 [INFO][5632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.114 [INFO][5632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.157 [WARNING][5632] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.157 [INFO][5632] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.162 [INFO][5632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:58.185747 containerd[1590]: 2026-01-28 00:58:58.179 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.185747 containerd[1590]: time="2026-01-28T00:58:58.186199614Z" level=info msg="TearDown network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" successfully" Jan 28 00:58:58.185747 containerd[1590]: time="2026-01-28T00:58:58.186380445Z" level=info msg="StopPodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" returns successfully" Jan 28 00:58:58.192025 containerd[1590]: time="2026-01-28T00:58:58.191936365Z" level=info msg="RemovePodSandbox for \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\"" Jan 28 00:58:58.192413 containerd[1590]: time="2026-01-28T00:58:58.192150988Z" level=info msg="Forcibly stopping sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\"" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.263 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0", GenerateName:"calico-apiserver-d58fb4688-", Namespace:"calico-apiserver", SelfLink:"", UID:"29716958-c780-41f2-b2ff-5fbdb74c3998", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d58fb4688", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39fcbff676bc5b0acce09c84e48d11b92223ab8924ca7493383705a745c61be7", Pod:"calico-apiserver-d58fb4688-vm2xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0cf5357259c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.264 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.264 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" iface="eth0" netns="" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.264 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.264 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.307 [INFO][5656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.307 [INFO][5656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.307 [INFO][5656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.316 [WARNING][5656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.316 [INFO][5656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" HandleID="k8s-pod-network.867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Workload="localhost-k8s-calico--apiserver--d58fb4688--vm2xw-eth0" Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.318 [INFO][5656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:58:58.325935 containerd[1590]: 2026-01-28 00:58:58.322 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba" Jan 28 00:58:58.325935 containerd[1590]: time="2026-01-28T00:58:58.325469949Z" level=info msg="TearDown network for sandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" successfully" Jan 28 00:58:58.330949 containerd[1590]: time="2026-01-28T00:58:58.330844825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 00:58:58.330949 containerd[1590]: time="2026-01-28T00:58:58.330926319Z" level=info msg="RemovePodSandbox \"867118767ef0c8aaa76710ae32555774468ae9d7a7ae95009b154e4f8654e1ba\" returns successfully" Jan 28 00:58:58.660511 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:59438.service - OpenSSH per-connection server daemon (10.0.0.1:59438). Jan 28 00:58:59.189775 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 59438 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:58:59.190439 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:59.204066 systemd-logind[1566]: New session 10 of user core. Jan 28 00:58:59.210104 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 00:58:59.943320 sshd[5665]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:00.017009 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:59438.service: Deactivated successfully. Jan 28 00:59:00.028177 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:59:00.031331 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Jan 28 00:59:00.034101 systemd-logind[1566]: Removed session 10. 
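The teardown entries above all repeat one Calico CNI shape: the cni-plugin warns that CNI_CONTAINERID no longer matches the WorkloadEndpoint's ContainerID (so the WEP is left alone), CleanUpNamespace is a no-op because no netns name is passed, and the ipam_plugin then acquires the host-wide IPAM lock, tries to release the address by handle ID, falls back to the workload ID, and ignores the "doesn't exist" miss, so RemovePodSandbox still returns successfully. Below is a minimal Go sketch of that release-under-lock, fallback-and-ignore pattern; the types and names are hypothetical illustrations, not Calico's actual code.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a stand-in for the IPAM datastore; mu plays the role of
// the "host-wide IPAM lock" seen in the ipam_plugin.go entries.
type ipamStore struct {
	mu       sync.Mutex
	byHandle map[string]string // handle/workload ID -> allocated address
}

// release mirrors the logged sequence: lock, try the handle ID, fall back
// to the workload ID, and treat a missing address as a no-op so that
// sandbox teardown stays idempotent ("Asked to release address but it
// doesn't exist. Ignoring").
func (s *ipamStore) release(handleID, workloadID string) error {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if _, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		return nil
	}
	// "Releasing address using workloadID" — the fallback path.
	if _, ok := s.byHandle[workloadID]; ok {
		delete(s.byHandle, workloadID)
	}
	return nil // a miss is not an error; teardown must still succeed
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}}
	// Both lookups miss, as in the log, yet release reports success.
	fmt.Println(s.release("k8s-pod-network.sandbox-id", "pod-workload-id"))
}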
Jan 28 00:59:00.266605 kubelet[2760]: E0128 00:59:00.265940 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:00.272401 kubelet[2760]: E0128 00:59:00.272058 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:59:01.282561 kubelet[2760]: E0128 00:59:01.282496 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:59:03.100158 kubelet[2760]: E0128 00:59:03.099189 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:59:03.277442 kubelet[2760]: E0128 00:59:03.276872 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:59:04.858263 kubelet[2760]: E0128 00:59:04.857888 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:59:05.618299 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:33642.service - OpenSSH per-connection server daemon (10.0.0.1:33642). Jan 28 00:59:05.735010 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 33642 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:05.763100 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:05.781368 systemd-logind[1566]: New session 11 of user core. Jan 28 00:59:05.789486 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 00:59:07.210298 kubelet[2760]: E0128 00:59:07.209968 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:59:09.088298 sshd[5686]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:09.396288 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:33642.service: Deactivated successfully. Jan 28 00:59:09.399561 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Jan 28 00:59:09.412483 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 00:59:09.417080 systemd-logind[1566]: Removed session 11. 
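The pod_workers.go entries above are kubelet's image-pull back-off at work: every Calico image under ghcr.io/flatcar/calico/*:v3.30.4 resolves to NotFound at the registry, so the affected pods cycle between ErrImagePull (a fresh failed pull) and ImagePullBackOff (waiting out the retry delay). A rough Go sketch of that retry shape follows, assuming kubelet-style exponential back-off (roughly doubling from about 10s up to a cap); pullImage, the delays, and the attempt count are illustrative assumptions, not kubelet's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullImage is a hypothetical stand-in for the runtime's pull RPC; here it
// always fails the way the registry does in the log above.
var errImageNotFound = errors.New("rpc error: code = NotFound desc = not found")

func pullImage(ref string) error { return errImageNotFound }

func main() {
	delay := 10 * time.Second       // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap
	for attempt := 1; attempt <= 4; attempt++ {
		if err := pullImage("ghcr.io/flatcar/calico/goldmane:v3.30.4"); err != nil {
			// While waiting out delay, the pod reports ImagePullBackOff.
			fmt.Printf("attempt %d: %v; next retry in %s\n", attempt, err, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue // a real loop would sleep for delay before retrying
		}
		break // pull succeeded; container can start
	}
}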
Jan 28 00:59:11.268335 containerd[1590]: time="2026-01-28T00:59:11.267504084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:59:11.357058 containerd[1590]: time="2026-01-28T00:59:11.356354444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:11.359384 containerd[1590]: time="2026-01-28T00:59:11.359240065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:59:11.359384 containerd[1590]: time="2026-01-28T00:59:11.359352326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:59:11.359925 kubelet[2760]: E0128 00:59:11.359613 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:59:11.359925 kubelet[2760]: E0128 00:59:11.359777 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:59:11.361796 kubelet[2760]: E0128 00:59:11.360014 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:720b431776f9430c801351a09b535fb1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:11.364266 containerd[1590]: time="2026-01-28T00:59:11.363644110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:59:11.448178 containerd[1590]: time="2026-01-28T00:59:11.446321533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:11.449473 containerd[1590]: time="2026-01-28T00:59:11.449385410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:59:11.449563 containerd[1590]: time="2026-01-28T00:59:11.449523229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:59:11.450155 kubelet[2760]: E0128 00:59:11.450080 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:59:11.450155 kubelet[2760]: E0128 00:59:11.450144 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:59:11.450436 kubelet[2760]: E0128 00:59:11.450285 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:11.452657 kubelet[2760]: E0128 00:59:11.452387 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:59:13.295772 containerd[1590]: time="2026-01-28T00:59:13.295523819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:59:13.418474 containerd[1590]: time="2026-01-28T00:59:13.418363813Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:13.448787 containerd[1590]: time="2026-01-28T00:59:13.448587280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes 
read=77" Jan 28 00:59:13.453030 containerd[1590]: time="2026-01-28T00:59:13.448787927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:59:13.453123 kubelet[2760]: E0128 00:59:13.450959 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:59:13.453123 kubelet[2760]: E0128 00:59:13.451042 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:59:13.453123 kubelet[2760]: E0128 00:59:13.451303 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjfl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-4dgpt_calico-apiserver(e87ecd7f-76fb-416b-97ac-bcf8061e4f34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:13.455961 kubelet[2760]: E0128 00:59:13.455648 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:59:14.099119 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:45308.service - OpenSSH per-connection server daemon (10.0.0.1:45308). Jan 28 00:59:14.207458 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 45308 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:14.210763 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:14.223404 systemd-logind[1566]: New session 12 of user core. Jan 28 00:59:14.256439 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 00:59:14.512978 sshd[5702]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:14.543119 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:45308.service: Deactivated successfully. Jan 28 00:59:14.551983 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 00:59:14.552317 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Jan 28 00:59:14.555542 systemd-logind[1566]: Removed session 12. Jan 28 00:59:15.268450 containerd[1590]: time="2026-01-28T00:59:15.267630644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:59:15.368790 containerd[1590]: time="2026-01-28T00:59:15.368611765Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:15.371144 containerd[1590]: time="2026-01-28T00:59:15.370988528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:59:15.371501 containerd[1590]: time="2026-01-28T00:59:15.371126991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:59:15.371553 kubelet[2760]: E0128 00:59:15.371360 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:59:15.371553 kubelet[2760]: E0128 00:59:15.371439 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:59:15.373133 kubelet[2760]: E0128 00:59:15.371860 2760 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cgj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-vm2xw_calico-apiserver(29716958-c780-41f2-b2ff-5fbdb74c3998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:15.373288 containerd[1590]: time="2026-01-28T00:59:15.372082408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:59:15.373632 kubelet[2760]: E0128 00:59:15.373349 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:59:15.454060 containerd[1590]: time="2026-01-28T00:59:15.453503531Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:15.460866 containerd[1590]: time="2026-01-28T00:59:15.460650837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:59:15.461962 containerd[1590]: time="2026-01-28T00:59:15.461269239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:59:15.462333 kubelet[2760]: E0128 00:59:15.461609 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:59:15.462333 kubelet[2760]: E0128 00:59:15.461810 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:59:15.462831 kubelet[2760]: E0128 00:59:15.462770 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7j2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55b8fb4bd5-kz6kn_calico-system(9d40997d-8269-410f-a37f-77eca7302f00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:15.464571 kubelet[2760]: E0128 00:59:15.464512 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:59:16.264366 kubelet[2760]: E0128 00:59:16.264269 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:16.266511 containerd[1590]: time="2026-01-28T00:59:16.266465513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:59:16.353292 containerd[1590]: time="2026-01-28T00:59:16.352467932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:16.355272 containerd[1590]: time="2026-01-28T00:59:16.355100606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:59:16.355530 containerd[1590]: time="2026-01-28T00:59:16.355192880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:59:16.355791 kubelet[2760]: E0128 00:59:16.355673 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:59:16.358000 kubelet[2760]: E0128 00:59:16.355812 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:59:16.358000 kubelet[2760]: E0128 00:59:16.356042 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:16.360030 containerd[1590]: time="2026-01-28T00:59:16.359626366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:59:16.457640 containerd[1590]: time="2026-01-28T00:59:16.457527026Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:16.460977 containerd[1590]: time="2026-01-28T00:59:16.460754478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:59:16.460977 containerd[1590]: time="2026-01-28T00:59:16.460879622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:59:16.461451 kubelet[2760]: E0128 00:59:16.461308 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:59:16.462424 kubelet[2760]: E0128 00:59:16.461453 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:59:16.462424 kubelet[2760]: E0128 00:59:16.461632 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:16.463296 kubelet[2760]: E0128 00:59:16.463200 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:59:17.264986 kubelet[2760]: E0128 00:59:17.264484 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:18.266537 containerd[1590]: time="2026-01-28T00:59:18.266422809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:59:18.350037 containerd[1590]: time="2026-01-28T00:59:18.349918078Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:18.351862 containerd[1590]: time="2026-01-28T00:59:18.351772970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:59:18.351980 containerd[1590]: time="2026-01-28T00:59:18.351904767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:59:18.353068 kubelet[2760]: E0128 00:59:18.352193 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:59:18.353068 kubelet[2760]: E0128 00:59:18.352265 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:59:18.353068 kubelet[2760]: E0128 00:59:18.352501 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9skff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-df4fc_calico-system(b6eec03e-d69e-4a77-be85-879339debc77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:18.354155 kubelet[2760]: E0128 00:59:18.354100 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:59:19.522285 systemd[1]: Started 
sshd@12-10.0.0.22:22-10.0.0.1:45310.service - OpenSSH per-connection server daemon (10.0.0.1:45310). Jan 28 00:59:19.586459 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 45310 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:19.588624 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:19.605778 systemd-logind[1566]: New session 13 of user core. Jan 28 00:59:19.614517 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 00:59:19.806429 sshd[5749]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:19.814052 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:45310.service: Deactivated successfully. Jan 28 00:59:19.820186 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Jan 28 00:59:19.821246 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 00:59:19.843523 systemd-logind[1566]: Removed session 13. Jan 28 00:59:21.263373 kubelet[2760]: E0128 00:59:21.263284 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:24.820244 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:54566.service - OpenSSH per-connection server daemon (10.0.0.1:54566). Jan 28 00:59:24.893430 sshd[5767]: Accepted publickey for core from 10.0.0.1 port 54566 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:24.896206 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:24.903226 systemd-logind[1566]: New session 14 of user core. Jan 28 00:59:24.909338 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 00:59:25.082260 sshd[5767]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:25.089922 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:54566.service: Deactivated successfully. Jan 28 00:59:25.095230 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Jan 28 00:59:25.095980 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 00:59:25.097949 systemd-logind[1566]: Removed session 14. 
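Every pull failure above follows the same shape: containerd asks ghcr.io to resolve a v3.30.4 tag, gets an HTTP 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces that as ErrImagePull. A quick way to reproduce one of these pulls outside kubelet's retry machinery is to drive containerd directly over its socket, in the same "k8s.io" namespace the kubelet-managed images live in. A minimal sketch; the v1 client module path (containerd 2.x moved it under .../v2) and the default socket location are assumptions, not taken from the log:

```go
// pullcheck.go - reproduce one of the failing pulls directly against containerd.
package main

import (
	"context"
	"fmt"
	"os"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket on most hosts (assumption).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer client.Close()

	// Kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references the records above show failing to resolve.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expect the same "failed to resolve reference ...: not found"
		// error text that kubelet is logging.
		fmt.Fprintln(os.Stderr, "pull:", err)
		os.Exit(1)
	}
	fmt.Println("resolved and pulled", img.Name())
}
```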
Jan 28 00:59:26.264801 kubelet[2760]: E0128 00:59:26.264634 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:59:26.265600 kubelet[2760]: E0128 00:59:26.264982 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:59:27.275845 kubelet[2760]: E0128 00:59:27.272620 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:59:29.270852 kubelet[2760]: E0128 00:59:29.268901 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:59:29.272570 kubelet[2760]: E0128 00:59:29.272445 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:59:30.101259 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:54578.service - OpenSSH per-connection server daemon (10.0.0.1:54578). Jan 28 00:59:30.170406 sshd[5786]: Accepted publickey for core from 10.0.0.1 port 54578 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:30.173850 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:30.181985 systemd-logind[1566]: New session 15 of user core. Jan 28 00:59:30.190327 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 00:59:30.353971 sshd[5786]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:30.360013 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:54578.service: Deactivated successfully. Jan 28 00:59:30.365035 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 00:59:30.365114 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Jan 28 00:59:30.370965 systemd-logind[1566]: Removed session 15. Jan 28 00:59:33.265223 kubelet[2760]: E0128 00:59:33.264663 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:59:35.375162 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:40422.service - OpenSSH per-connection server daemon (10.0.0.1:40422). Jan 28 00:59:35.428318 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 40422 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:35.430534 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:35.437179 systemd-logind[1566]: New session 16 of user core. Jan 28 00:59:35.444158 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 00:59:35.662962 sshd[5803]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:35.671319 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:40430.service - OpenSSH per-connection server daemon (10.0.0.1:40430). Jan 28 00:59:35.682039 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:40422.service: Deactivated successfully. Jan 28 00:59:35.700084 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 00:59:35.703491 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Jan 28 00:59:35.705630 systemd-logind[1566]: Removed session 16. 
Jan 28 00:59:35.741448 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 40430 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:35.746534 sshd[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:35.753938 systemd-logind[1566]: New session 17 of user core. Jan 28 00:59:35.759065 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 00:59:35.988041 sshd[5817]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:36.007942 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:40436.service - OpenSSH per-connection server daemon (10.0.0.1:40436). Jan 28 00:59:36.010198 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:40430.service: Deactivated successfully. Jan 28 00:59:36.026523 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 00:59:36.035299 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Jan 28 00:59:36.049851 systemd-logind[1566]: Removed session 17. Jan 28 00:59:36.080199 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 40436 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:36.083192 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:36.091275 systemd-logind[1566]: New session 18 of user core. Jan 28 00:59:36.097376 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 00:59:36.247383 sshd[5830]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:36.252074 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:40436.service: Deactivated successfully. Jan 28 00:59:36.254965 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 00:59:36.254967 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Jan 28 00:59:36.256749 systemd-logind[1566]: Removed session 18. 
Jan 28 00:59:38.266376 kubelet[2760]: E0128 00:59:38.266256 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:59:39.265471 kubelet[2760]: E0128 00:59:39.265309 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:59:40.265562 kubelet[2760]: E0128 00:59:40.265102 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:59:41.261207 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:40438.service - OpenSSH per-connection server daemon (10.0.0.1:40438). Jan 28 00:59:41.263988 kubelet[2760]: E0128 00:59:41.263872 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:41.306631 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 40438 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:41.307746 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:41.316123 systemd-logind[1566]: New session 19 of user core. Jan 28 00:59:41.325410 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 00:59:41.488926 sshd[5851]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:41.494251 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:40438.service: Deactivated successfully. Jan 28 00:59:41.498123 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Jan 28 00:59:41.499296 systemd[1]: session-19.scope: Deactivated successfully. 
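From here on the records alternate between fresh ErrImagePull attempts and "Back-off pulling image" skips, with the retries for any given pod UID spacing out over time. That cadence matches kubelet's image-pull backoff, which by default (as I recall; the constants below are an assumption, not something the log states) starts at roughly 10 seconds, doubles per failure, and caps at 5 minutes. A toy schedule:

```go
// backoff.go - sketch of the retry cadence behind the alternating
// ErrImagePull / ImagePullBackOff records above. 10s initial and 300s cap
// are assumed kubelet defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	maxDelay := 300 * time.Second
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d at t+%v (next retry in %v)\n", attempt, elapsed, delay)
		elapsed += delay
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```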
Jan 28 00:59:41.500664 systemd-logind[1566]: Removed session 19. Jan 28 00:59:42.267411 kubelet[2760]: E0128 00:59:42.267326 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:59:43.264073 kubelet[2760]: E0128 00:59:43.263953 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:44.267823 kubelet[2760]: E0128 00:59:44.267633 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00" Jan 28 00:59:46.264372 kubelet[2760]: E0128 00:59:46.264166 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77" Jan 28 00:59:46.503015 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:34320.service - OpenSSH per-connection server daemon (10.0.0.1:34320). Jan 28 00:59:46.572160 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 34320 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:46.580986 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:46.605888 systemd-logind[1566]: New session 20 of user core. Jan 28 00:59:46.606988 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 00:59:46.778806 sshd[5884]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:46.783778 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:34320.service: Deactivated successfully. Jan 28 00:59:46.787150 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. 
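The recurring dns.go:153 record is unrelated to the pull failures: the node's resolv.conf lists more than three nameservers, and kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8 here, per the record itself), in line with the classic three-resolver limit inherited from glibc. A minimal stdlib sketch to spot the omitted entries on such a host; the path and the limit of 3 are drawn from that convention, not from the log:

```go
// dnscheck.go - report nameservers beyond the first three in resolv.conf.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	const limit = 3 // glibc/kubelet resolver limit (assumption)
	if len(servers) > limit {
		fmt.Printf("applied: %v\nomitted: %v\n", servers[:limit], servers[limit:])
	} else {
		fmt.Printf("all %d nameservers fit within the limit\n", len(servers))
	}
}
```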
Jan 28 00:59:46.787291 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 00:59:46.789350 systemd-logind[1566]: Removed session 20. Jan 28 00:59:49.264121 kubelet[2760]: E0128 00:59:49.264032 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 00:59:51.790079 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:34328.service - OpenSSH per-connection server daemon (10.0.0.1:34328). Jan 28 00:59:51.840657 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 34328 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:51.843607 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:51.851141 systemd-logind[1566]: New session 21 of user core. Jan 28 00:59:51.857551 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 00:59:52.047079 sshd[5907]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:52.056527 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Jan 28 00:59:52.058410 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:34328.service: Deactivated successfully. Jan 28 00:59:52.067585 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 00:59:52.070890 systemd-logind[1566]: Removed session 21. Jan 28 00:59:52.264366 kubelet[2760]: E0128 00:59:52.264237 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998" Jan 28 00:59:53.265207 containerd[1590]: time="2026-01-28T00:59:53.265092062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:59:53.267300 kubelet[2760]: E0128 00:59:53.266488 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014" Jan 28 00:59:53.349993 containerd[1590]: time="2026-01-28T00:59:53.349894146Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:53.351673 containerd[1590]: time="2026-01-28T00:59:53.351616343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:59:53.353866 containerd[1590]: time="2026-01-28T00:59:53.351779379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:59:53.353925 kubelet[2760]: E0128 00:59:53.353335 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:59:53.353925 kubelet[2760]: E0128 00:59:53.353407 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:59:53.356805 kubelet[2760]: E0128 00:59:53.356630 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:720b431776f9430c801351a09b535fb1,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:53.361623 containerd[1590]: time="2026-01-28T00:59:53.361554049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:59:53.484910 containerd[1590]: time="2026-01-28T00:59:53.484847466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:53.486588 containerd[1590]: 
time="2026-01-28T00:59:53.486513269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:59:53.486816 containerd[1590]: time="2026-01-28T00:59:53.486611344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:59:53.486959 kubelet[2760]: E0128 00:59:53.486774 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:59:53.486959 kubelet[2760]: E0128 00:59:53.486821 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:59:53.486959 kubelet[2760]: E0128 00:59:53.486955 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mh4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fd76b96dc-mbjdc_calico-system(f873fe7c-2fd9-4543-9ebd-959fbca499b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:53.488306 kubelet[2760]: E0128 00:59:53.488201 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0" Jan 28 00:59:55.267667 containerd[1590]: time="2026-01-28T00:59:55.267549347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:59:55.335217 containerd[1590]: time="2026-01-28T00:59:55.335070781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 00:59:55.338238 containerd[1590]: time="2026-01-28T00:59:55.338129683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:59:55.338414 containerd[1590]: time="2026-01-28T00:59:55.338268954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:59:55.338600 kubelet[2760]: E0128 00:59:55.338520 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:59:55.338600 kubelet[2760]: E0128 00:59:55.338587 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:59:55.339229 kubelet[2760]: E0128 00:59:55.338830 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjfl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-4dgpt_calico-apiserver(e87ecd7f-76fb-416b-97ac-bcf8061e4f34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:59:55.343189 kubelet[2760]: E0128 00:59:55.341393 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34" Jan 28 00:59:57.056233 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:43202.service - OpenSSH per-connection server daemon (10.0.0.1:43202). Jan 28 00:59:57.099024 sshd[5929]: Accepted publickey for core from 10.0.0.1 port 43202 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:57.101279 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:57.109512 systemd-logind[1566]: New session 22 of user core. Jan 28 00:59:57.114158 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 00:59:57.292185 sshd[5929]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:57.298429 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:43202.service: Deactivated successfully. 
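By this point every Calico component on the node (apiserver, kube-controllers, csi, node-driver-registrar, goldmane, whisker, whisker-backend) is wedged on the same missing v3.30.4 tags. When triaging a registry-wide gap like this, it helps to enumerate exactly which image references the cluster expects and check each against the registry (for example with the tag-existence sketch earlier). A hedged client-go sketch using standard kubeconfig loading; nothing in it is taken from the log:

```go
// listimages.go - list every distinct image referenced by pods in the cluster.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Conventional kubeconfig location (assumption).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Empty namespace = all namespaces.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	seen := map[string]bool{}
	for _, p := range pods.Items {
		for _, c := range append(p.Spec.InitContainers, p.Spec.Containers...) {
			if !seen[c.Image] {
				seen[c.Image] = true
				fmt.Println(c.Image)
			}
		}
	}
}
```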
Jan 28 00:59:57.303975 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit.
Jan 28 00:59:57.306253 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 00:59:57.308930 systemd-logind[1566]: Removed session 22.
Jan 28 00:59:58.270732 kubelet[2760]: E0128 00:59:58.269423 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77"
Jan 28 00:59:58.275257 containerd[1590]: time="2026-01-28T00:59:58.271572913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 28 00:59:58.391998 containerd[1590]: time="2026-01-28T00:59:58.391749081Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 00:59:58.393368 containerd[1590]: time="2026-01-28T00:59:58.393163968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 28 00:59:58.393368 containerd[1590]: time="2026-01-28T00:59:58.393248305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 28 00:59:58.393770 kubelet[2760]: E0128 00:59:58.393523 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 00:59:58.393770 kubelet[2760]: E0128 00:59:58.393582 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 00:59:58.394326 kubelet[2760]: E0128 00:59:58.393763 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7j2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55b8fb4bd5-kz6kn_calico-system(9d40997d-8269-410f-a37f-77eca7302f00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 28 00:59:58.395437 kubelet[2760]: E0128 00:59:58.395008 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00"
Jan 28 01:00:02.310059 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:43214.service - OpenSSH per-connection server daemon (10.0.0.1:43214).
Jan 28 01:00:02.361010 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 43214 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:02.363880 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:02.372899 systemd-logind[1566]: New session 23 of user core.
Jan 28 01:00:02.381292 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 01:00:02.566007 sshd[5948]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:02.577898 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:50838.service - OpenSSH per-connection server daemon (10.0.0.1:50838).
Jan 28 01:00:02.579029 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:43214.service: Deactivated successfully.
Jan 28 01:00:02.586273 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 01:00:02.590555 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit.
Jan 28 01:00:02.594416 systemd-logind[1566]: Removed session 23.
Jan 28 01:00:02.629623 sshd[5960]: Accepted publickey for core from 10.0.0.1 port 50838 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:02.632060 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:02.640639 systemd-logind[1566]: New session 24 of user core.
Jan 28 01:00:02.653301 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 01:00:03.156120 sshd[5960]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:03.169128 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852).
Jan 28 01:00:03.170011 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:50838.service: Deactivated successfully.
Jan 28 01:00:03.180585 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 01:00:03.184158 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit.
Jan 28 01:00:03.187294 systemd-logind[1566]: Removed session 24.
Jan 28 01:00:03.262080 sshd[5974]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:03.264462 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:03.269236 containerd[1590]: time="2026-01-28T01:00:03.268665653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 01:00:03.277106 systemd-logind[1566]: New session 25 of user core.
Jan 28 01:00:03.283176 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 01:00:03.989942 sshd[5974]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:04.008482 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:50860.service - OpenSSH per-connection server daemon (10.0.0.1:50860).
Jan 28 01:00:04.009469 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:50852.service: Deactivated successfully.
Jan 28 01:00:04.017569 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 01:00:04.024467 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit.
Jan 28 01:00:04.035524 systemd-logind[1566]: Removed session 25.
Jan 28 01:00:04.098925 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 50860 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:04.117129 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:04.128221 systemd-logind[1566]: New session 26 of user core.
Jan 28 01:00:04.145285 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 01:00:04.272004 kubelet[2760]: E0128 01:00:04.271796 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0"
Jan 28 01:00:04.501395 sshd[5996]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:04.507445 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit.
Jan 28 01:00:04.508986 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:50860.service: Deactivated successfully.
Jan 28 01:00:04.522193 systemd[1]: Started sshd@26-10.0.0.22:22-10.0.0.1:50870.service - OpenSSH per-connection server daemon (10.0.0.1:50870).
Jan 28 01:00:04.522769 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 01:00:04.526853 systemd-logind[1566]: Removed session 26.
Jan 28 01:00:04.551630 containerd[1590]: time="2026-01-28T01:00:04.551044587Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:00:04.554622 containerd[1590]: time="2026-01-28T01:00:04.554406697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 01:00:04.555159 containerd[1590]: time="2026-01-28T01:00:04.554947130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 01:00:04.555834 kubelet[2760]: E0128 01:00:04.555653 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:00:04.555834 kubelet[2760]: E0128 01:00:04.555788 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:00:04.556085 kubelet[2760]: E0128 01:00:04.555960 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cgj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d58fb4688-vm2xw_calico-apiserver(29716958-c780-41f2-b2ff-5fbdb74c3998): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:00:04.557407 kubelet[2760]: E0128 01:00:04.557232 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998"
Jan 28 01:00:04.663074 sshd[6014]: Accepted publickey for core from 10.0.0.1 port 50870 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:04.664107 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:04.674579 systemd-logind[1566]: New session 27 of user core.
Jan 28 01:00:04.679052 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 28 01:00:04.863766 sshd[6014]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:04.874522 systemd[1]: sshd@26-10.0.0.22:22-10.0.0.1:50870.service: Deactivated successfully.
Jan 28 01:00:04.878976 systemd[1]: session-27.scope: Deactivated successfully.
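Every pull failure recorded here follows the same path: containerd resolves the tag against ghcr.io, the registry answers 404 ("trying next host - response was http.StatusNotFound"), the CRI PullImage call returns code = NotFound, and kubelet surfaces that as ErrImagePull. The sketch below is illustrative only, not part of this log: it reproduces just the resolve step through the containerd 1.x Go client, assuming the stock socket path and the k8s.io namespace that the CRI integration uses.

// pullcheck.go - a minimal sketch (not from this log) of the resolve step
// recorded above: ask containerd to pull one of the failing tags and report
// whether the failure is the registry-side "not found".
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed defaults: the stock containerd socket and the namespace
	// kubelet's CRI integration stores its images in.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"

	// Pull resolves the reference before fetching any layers; a missing tag
	// fails at this step, which is why the records above show only tens of
	// bytes read per attempt.
	if _, err := client.Pull(ctx, ref); err != nil {
		if errdefs.IsNotFound(err) {
			fmt.Printf("%s: tag does not exist upstream\n", ref)
			return
		}
		fmt.Fprintln(os.Stderr, "pull failed:", err)
		os.Exit(1)
	}
	fmt.Println("pulled", ref)
}

With a missing tag, errdefs.IsNotFound matches the same NotFound condition that kubelet rewraps into the ErrImagePull records above.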
Jan 28 01:00:04.880548 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit.
Jan 28 01:00:04.882053 systemd-logind[1566]: Removed session 27.
Jan 28 01:00:06.267269 containerd[1590]: time="2026-01-28T01:00:06.267210873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 28 01:00:07.266586 kubelet[2760]: E0128 01:00:07.266474 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34"
Jan 28 01:00:07.721772 containerd[1590]: time="2026-01-28T01:00:07.721639632Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:00:07.725350 containerd[1590]: time="2026-01-28T01:00:07.725133890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 28 01:00:07.725350 containerd[1590]: time="2026-01-28T01:00:07.725287507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 28 01:00:07.725940 kubelet[2760]: E0128 01:00:07.725546 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 01:00:07.726817 kubelet[2760]: E0128 01:00:07.726336 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 01:00:07.727675 kubelet[2760]: E0128 01:00:07.727044 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:00:07.732006 containerd[1590]: time="2026-01-28T01:00:07.731473102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 28 01:00:08.640950 containerd[1590]: time="2026-01-28T01:00:08.640650916Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:00:08.642529 containerd[1590]: time="2026-01-28T01:00:08.642422746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 28 01:00:08.642529 containerd[1590]: time="2026-01-28T01:00:08.642489474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 28 01:00:08.642839 kubelet[2760]: E0128 01:00:08.642748 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 28 01:00:08.643500 kubelet[2760]: E0128 01:00:08.642930 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 28 01:00:08.643500 kubelet[2760]: E0128 01:00:08.643368 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g87x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jxgdl_calico-system(ca219588-36d1-44cb-b7f0-f29129c91014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:00:08.645213 kubelet[2760]: E0128 01:00:08.645142 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014"
Jan 28 01:00:09.266313 containerd[1590]: time="2026-01-28T01:00:09.266268603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 28 01:00:09.268577 kubelet[2760]: E0128 01:00:09.268496 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00"
Jan 28 01:00:09.736331 containerd[1590]: time="2026-01-28T01:00:09.735376898Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:00:09.738316 containerd[1590]: time="2026-01-28T01:00:09.737918828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 28 01:00:09.738316 containerd[1590]: time="2026-01-28T01:00:09.738032045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 28 01:00:09.738808 kubelet[2760]: E0128 01:00:09.738193 2760 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:00:09.738808 kubelet[2760]: E0128 01:00:09.738417 2760 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 01:00:09.738808 kubelet[2760]: E0128 01:00:09.738586 2760 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9skff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-df4fc_calico-system(b6eec03e-d69e-4a77-be85-879339debc77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:00:09.739938 kubelet[2760]: E0128 01:00:09.739824 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77"
Jan 28 01:00:09.873104 systemd[1]: Started sshd@27-10.0.0.22:22-10.0.0.1:50880.service - OpenSSH per-connection server daemon (10.0.0.1:50880).
Jan 28 01:00:09.919361 sshd[6031]: Accepted publickey for core from 10.0.0.1 port 50880 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:09.921764 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:09.931364 systemd-logind[1566]: New session 28 of user core.
Jan 28 01:00:09.941162 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 01:00:10.061131 sshd[6031]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:10.066985 systemd[1]: sshd@27-10.0.0.22:22-10.0.0.1:50880.service: Deactivated successfully.
Jan 28 01:00:10.071500 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit.
Jan 28 01:00:10.071792 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 01:00:10.073599 systemd-logind[1566]: Removed session 28.
Jan 28 01:00:14.263197 kubelet[2760]: E0128 01:00:14.263108 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:00:15.078048 systemd[1]: Started sshd@28-10.0.0.22:22-10.0.0.1:41146.service - OpenSSH per-connection server daemon (10.0.0.1:41146).
Jan 28 01:00:15.114059 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 41146 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:15.116560 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:15.125001 systemd-logind[1566]: New session 29 of user core.
Jan 28 01:00:15.136124 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 01:00:15.261053 sshd[6070]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:15.264225 kubelet[2760]: E0128 01:00:15.263634 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:00:15.266305 systemd[1]: sshd@28-10.0.0.22:22-10.0.0.1:41146.service: Deactivated successfully.
Jan 28 01:00:15.276009 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 01:00:15.277376 systemd-logind[1566]: Session 29 logged out. Waiting for processes to exit.
Jan 28 01:00:15.279507 systemd-logind[1566]: Removed session 29.
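The recurring dns.go:153 "Nameserver limits exceeded" records are warnings rather than failures: the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so kubelet drops the extras and applies only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8). A rough sketch of that truncation, assuming the conventional three-entry cap shared with the glibc resolver:

// dnscap.go - a sketch (not kubelet's actual code) of the truncation behind
// the "Nameserver limits exceeded" records above. Assumption: the cap of 3
// nameservers, matching the glibc resolver limit that kubelet enforces.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Anything past the cap is omitted, which is exactly what the warning
	// in the log reports.
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}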
Jan 28 01:00:16.265343 kubelet[2760]: E0128 01:00:16.265229 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998"
Jan 28 01:00:16.266992 kubelet[2760]: E0128 01:00:16.265865 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fd76b96dc-mbjdc" podUID="f873fe7c-2fd9-4543-9ebd-959fbca499b0"
Jan 28 01:00:20.264669 kubelet[2760]: E0128 01:00:20.264392 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-df4fc" podUID="b6eec03e-d69e-4a77-be85-879339debc77"
Jan 28 01:00:20.266553 kubelet[2760]: E0128 01:00:20.264906 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-4dgpt" podUID="e87ecd7f-76fb-416b-97ac-bcf8061e4f34"
Jan 28 01:00:20.266553 kubelet[2760]: E0128 01:00:20.266354 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jxgdl" podUID="ca219588-36d1-44cb-b7f0-f29129c91014"
Jan 28 01:00:20.279317 systemd[1]: Started sshd@29-10.0.0.22:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158).
Jan 28 01:00:20.340300 sshd[6109]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:20.342777 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:20.349075 systemd-logind[1566]: New session 30 of user core.
Jan 28 01:00:20.357457 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 28 01:00:20.499818 sshd[6109]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:20.504907 systemd[1]: sshd@29-10.0.0.22:22-10.0.0.1:41158.service: Deactivated successfully.
Jan 28 01:00:20.510375 systemd-logind[1566]: Session 30 logged out. Waiting for processes to exit.
Jan 28 01:00:20.511257 systemd[1]: session-30.scope: Deactivated successfully.
Jan 28 01:00:20.512928 systemd-logind[1566]: Removed session 30.
Jan 28 01:00:22.269780 kubelet[2760]: E0128 01:00:22.266259 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55b8fb4bd5-kz6kn" podUID="9d40997d-8269-410f-a37f-77eca7302f00"
Jan 28 01:00:24.264095 kubelet[2760]: E0128 01:00:24.263942 2760 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:00:25.515531 systemd[1]: Started sshd@30-10.0.0.22:22-10.0.0.1:34808.service - OpenSSH per-connection server daemon (10.0.0.1:34808).
Jan 28 01:00:25.562539 sshd[6125]: Accepted publickey for core from 10.0.0.1 port 34808 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4
Jan 28 01:00:25.565548 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:00:25.573825 systemd-logind[1566]: New session 31 of user core.
Jan 28 01:00:25.585370 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 28 01:00:25.753231 sshd[6125]: pam_unix(sshd:session): session closed for user core
Jan 28 01:00:25.758466 systemd[1]: sshd@30-10.0.0.22:22-10.0.0.1:34808.service: Deactivated successfully.
Jan 28 01:00:25.762475 systemd[1]: session-31.scope: Deactivated successfully.
Jan 28 01:00:25.762634 systemd-logind[1566]: Session 31 logged out. Waiting for processes to exit.
Jan 28 01:00:25.765221 systemd-logind[1566]: Removed session 31.
Jan 28 01:00:27.267638 kubelet[2760]: E0128 01:00:27.267579 2760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d58fb4688-vm2xw" podUID="29716958-c780-41f2-b2ff-5fbdb74c3998"
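Throughout this section each image alternates between an ErrImagePull record (a fresh pull attempt that fails) and ImagePullBackOff records (kubelet declining to retry until the backoff expires), with the gaps between attempts growing. The sketch below is a toy model of that doubling backoff; the 10-second initial delay and 5-minute cap are kubelet's commonly documented defaults, assumed here rather than read from this log.

// backoff.go - a sketch of the doubling backoff that produces the
// alternating ErrImagePull / ImagePullBackOff records above. The 10s
// initial delay and 5m cap are assumptions (kubelet's usual defaults),
// not values taken from this log.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max, next time.Duration
}

// fail records one failed attempt and returns how long to wait before
// the next one: start at the initial delay, then double up to the cap.
func (b *backoff) fail() time.Duration {
	if b.next == 0 {
		b.next = b.initial
	} else {
		b.next *= 2
		if b.next > b.max {
			b.next = b.max
		}
	}
	return b.next
}

func main() {
	b := &backoff{initial: 10 * time.Second, max: 5 * time.Minute}
	// Each failed pull of a missing tag lengthens the wait before the next
	// attempt; in between, pod syncs are rejected with ImagePullBackOff.
	for i := 1; i <= 8; i++ {
		fmt.Printf("attempt %d failed (not found); next retry in %s\n", i, b.fail())
	}
}

Since the tags genuinely do not exist upstream, the retries can never succeed and the pods stay in this ErrImagePull/ImagePullBackOff cycle for the rest of the log.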