Jan 20 00:32:19.100341 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:32:19.100417 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:32:19.100439 kernel: BIOS-provided physical RAM map: Jan 20 00:32:19.100451 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 00:32:19.100462 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 20 00:32:19.100472 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 20 00:32:19.100485 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 20 00:32:19.100496 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 20 00:32:19.100568 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 20 00:32:19.100582 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 20 00:32:19.100599 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 20 00:32:19.100610 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 20 00:32:19.100620 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 20 00:32:19.100632 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 20 00:32:19.100645 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 20 00:32:19.100657 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 20 00:32:19.100673 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 20 00:32:19.100685 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 20 00:32:19.100697 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 20 00:32:19.100709 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:32:19.100720 kernel: NX (Execute Disable) protection: active Jan 20 00:32:19.100732 kernel: APIC: Static calls initialized Jan 20 00:32:19.100743 kernel: efi: EFI v2.7 by EDK II Jan 20 00:32:19.100755 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 20 00:32:19.100767 kernel: SMBIOS 2.8 present. 
Jan 20 00:32:19.100778 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 20 00:32:19.100789 kernel: Hypervisor detected: KVM Jan 20 00:32:19.100805 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:32:19.100817 kernel: kvm-clock: using sched offset of 6147705813 cycles Jan 20 00:32:19.100829 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:32:19.100841 kernel: tsc: Detected 2445.426 MHz processor Jan 20 00:32:19.100853 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:32:19.100866 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:32:19.100879 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 20 00:32:19.100891 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 00:32:19.100903 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:32:19.100919 kernel: Using GB pages for direct mapping Jan 20 00:32:19.100931 kernel: Secure boot disabled Jan 20 00:32:19.100943 kernel: ACPI: Early table checksum verification disabled Jan 20 00:32:19.100956 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 20 00:32:19.100974 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 20 00:32:19.100987 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:32:19.101000 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:32:19.101017 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 20 00:32:19.101030 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:32:19.101042 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:32:19.101055 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:32:19.101068 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:32:19.101081 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 00:32:19.101094 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 20 00:32:19.101111 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 20 00:32:19.101123 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 20 00:32:19.101136 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 20 00:32:19.101148 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 20 00:32:19.101161 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 20 00:32:19.101173 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 20 00:32:19.101186 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 20 00:32:19.101198 kernel: No NUMA configuration found Jan 20 00:32:19.101211 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 20 00:32:19.101228 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 20 00:32:19.101240 kernel: Zone ranges: Jan 20 00:32:19.101252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:32:19.101265 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 20 00:32:19.101277 kernel: Normal empty Jan 20 00:32:19.101290 kernel: Movable zone start for each node Jan 20 00:32:19.101302 kernel: Early memory node ranges Jan 20 00:32:19.101339 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 00:32:19.101352 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 20 00:32:19.101365 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 20 00:32:19.101429 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 20 00:32:19.101465 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 20 00:32:19.101500 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 20 00:32:19.101582 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 20 00:32:19.101595 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:32:19.101608 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 00:32:19.101620 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 20 00:32:19.101633 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:32:19.101645 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 20 00:32:19.101657 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 20 00:32:19.101702 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 20 00:32:19.101715 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:32:19.101728 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:32:19.101741 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:32:19.101754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:32:19.101766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:32:19.101779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:32:19.101792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:32:19.101804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:32:19.101821 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:32:19.101834 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:32:19.101847 kernel: TSC deadline timer available Jan 20 00:32:19.101859 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:32:19.101872 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:32:19.101885 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:32:19.101897 kernel: kvm-guest: setup PV sched yield Jan 20 00:32:19.101910 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 20 00:32:19.101923 kernel: Booting paravirtualized kernel on KVM Jan 20 00:32:19.101939 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:32:19.101953 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:32:19.101965 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:32:19.101978 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:32:19.101990 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:32:19.102002 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:32:19.102015 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:32:19.102029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 
00:32:19.102042 kernel: random: crng init done Jan 20 00:32:19.102117 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:32:19.102133 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:32:19.102146 kernel: Fallback order for Node 0: 0 Jan 20 00:32:19.102158 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 20 00:32:19.102171 kernel: Policy zone: DMA32 Jan 20 00:32:19.102184 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:32:19.102197 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 166124K reserved, 0K cma-reserved) Jan 20 00:32:19.102210 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:32:19.102227 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:32:19.102239 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:32:19.102252 kernel: Dynamic Preempt: voluntary Jan 20 00:32:19.102265 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:32:19.102293 kernel: rcu: RCU event tracing is enabled. Jan 20 00:32:19.102310 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:32:19.102323 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:32:19.102337 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:32:19.102350 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:32:19.102363 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:32:19.102414 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:32:19.102433 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:32:19.102447 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:32:19.102460 kernel: Console: colour dummy device 80x25 Jan 20 00:32:19.102499 kernel: printk: console [ttyS0] enabled Jan 20 00:32:19.102563 kernel: ACPI: Core revision 20230628 Jan 20 00:32:19.102577 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:32:19.102595 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:32:19.102609 kernel: x2apic enabled Jan 20 00:32:19.102622 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:32:19.102635 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:32:19.102648 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:32:19.102662 kernel: kvm-guest: setup PV IPIs Jan 20 00:32:19.102675 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:32:19.102689 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:32:19.102702 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 20 00:32:19.102719 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:32:19.102734 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:32:19.102747 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:32:19.102761 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:32:19.102775 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:32:19.102788 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:32:19.102801 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:32:19.102815 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:32:19.102829 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 00:32:19.102847 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:32:19.102860 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:32:19.102873 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:32:19.102886 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:32:19.102900 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:32:19.102913 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:32:19.102926 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:32:19.102940 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:32:19.102953 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:32:19.102971 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:32:19.102984 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:32:19.102997 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:32:19.103011 kernel: landlock: Up and running. Jan 20 00:32:19.103024 kernel: SELinux: Initializing. Jan 20 00:32:19.103037 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:32:19.103051 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:32:19.103064 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:32:19.103082 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:32:19.103100 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:32:19.103113 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:32:19.103127 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:32:19.103140 kernel: signal: max sigframe size: 1776 Jan 20 00:32:19.103153 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:32:19.103167 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:32:19.103180 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:32:19.103193 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:32:19.103210 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:32:19.103224 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:32:19.103237 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:32:19.103251 kernel: smpboot: Max logical packages: 1 Jan 20 00:32:19.103264 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 00:32:19.103277 kernel: devtmpfs: initialized Jan 20 00:32:19.103290 kernel: x86/mm: Memory block size: 128MB Jan 20 00:32:19.103304 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 20 00:32:19.103317 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 20 00:32:19.103331 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 20 00:32:19.103348 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 20 00:32:19.103361 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 20 00:32:19.103411 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:32:19.103426 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:32:19.103440 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:32:19.103453 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:32:19.103466 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:32:19.103480 kernel: audit: type=2000 audit(1768869137.573:1): state=initialized audit_enabled=0 res=1 Jan 20 00:32:19.103498 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:32:19.103565 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:32:19.103579 kernel: cpuidle: using governor menu Jan 20 00:32:19.103592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:32:19.103605 kernel: dca service started, version 1.12.1 Jan 20 00:32:19.103618 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:32:19.103632 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:32:19.103645 kernel: PCI: Using configuration type 1 for base access Jan 20 00:32:19.103658 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 00:32:19.103677 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:32:19.103691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:32:19.103704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:32:19.103717 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:32:19.103730 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:32:19.103743 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:32:19.103757 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:32:19.103770 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:32:19.103783 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:32:19.103800 kernel: ACPI: Interpreter enabled Jan 20 00:32:19.103814 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:32:19.103827 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:32:19.103840 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:32:19.103853 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:32:19.103866 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:32:19.103880 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:32:19.104166 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:32:19.104435 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:32:19.104895 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:32:19.104918 kernel: PCI host bridge to bus 0000:00 Jan 20 00:32:19.105132 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:32:19.105330 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 00:32:19.105660 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:32:19.105856 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:32:19.106057 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:32:19.106252 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 20 00:32:19.106486 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:32:19.106798 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:32:19.107023 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:32:19.107233 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 20 00:32:19.107490 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 20 00:32:19.107761 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 20 00:32:19.108066 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 20 00:32:19.108278 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:32:19.108616 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:32:19.108834 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 20 00:32:19.109045 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 20 00:32:19.109261 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 20 00:32:19.109615 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:32:19.109834 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 20 00:32:19.110044 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 20 00:32:19.110252 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 20 00:32:19.110580 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:32:19.110798 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 20 00:32:19.111018 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 20 00:32:19.111228 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 20 00:32:19.111480 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 20 00:32:19.111764 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:32:19.111979 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:32:19.112202 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:32:19.112452 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 20 00:32:19.112788 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 20 00:32:19.113011 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:32:19.113225 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 20 00:32:19.113245 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:32:19.113259 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:32:19.113273 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:32:19.113286 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:32:19.113307 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:32:19.113320 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:32:19.113333 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:32:19.113347 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:32:19.113360 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 00:32:19.113412 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:32:19.113428 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:32:19.113441 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:32:19.113454 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:32:19.113472 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:32:19.113486 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:32:19.113499 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:32:19.113581 kernel: iommu: Default domain type: Translated Jan 20 00:32:19.113596 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:32:19.113610 kernel: efivars: Registered efivars operations Jan 20 00:32:19.113623 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:32:19.113636 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:32:19.113650 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 20 00:32:19.113663 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 20 00:32:19.113682 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 20 00:32:19.113696 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 20 00:32:19.113908 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:32:19.114114 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:32:19.114321 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:32:19.114340 kernel: vgaarb: loaded Jan 20 00:32:19.114354 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 20 00:32:19.114367 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:32:19.114421 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:32:19.114436 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:32:19.114449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:32:19.114463 kernel: pnp: PnP ACPI init Jan 20 00:32:19.114789 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:32:19.114811 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:32:19.114825 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:32:19.114838 kernel: NET: Registered PF_INET protocol family Jan 20 00:32:19.114857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:32:19.114871 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:32:19.114885 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:32:19.114899 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:32:19.114912 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:32:19.114926 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:32:19.114939 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:32:19.114952 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:32:19.114966 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:32:19.114984 kernel: NET: Registered PF_XDP protocol family Jan 20 00:32:19.115193 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 20 00:32:19.115442 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 20 00:32:19.115701 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:32:19.115898 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:32:19.116090 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:32:19.116281 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:32:19.116566 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 00:32:19.116772 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 20 00:32:19.116791 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:32:19.116805 kernel: Initialise system trusted keyrings Jan 20 00:32:19.116818 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:32:19.116831 kernel: Key type asymmetric registered Jan 20 00:32:19.116844 kernel: Asymmetric key parser 'x509' registered Jan 20 00:32:19.116857 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:32:19.116870 kernel: io scheduler mq-deadline registered Jan 20 00:32:19.116883 kernel: io scheduler kyber registered Jan 20 00:32:19.116902 kernel: io scheduler bfq registered Jan 20 00:32:19.116916 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:32:19.116930 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:32:19.116943 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:32:19.116957 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:32:19.116971 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:32:19.116984 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 20 00:32:19.116997 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:32:19.117011 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:32:19.117030 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:32:19.117250 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:32:19.117270 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:32:19.117560 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:32:19.117770 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:32:18 UTC (1768869138) Jan 20 00:32:19.118127 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:32:19.118148 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:32:19.118162 kernel: efifb: probing for efifb Jan 20 00:32:19.118182 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 20 00:32:19.118196 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 20 00:32:19.118209 kernel: efifb: scrolling: redraw Jan 20 00:32:19.118222 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 20 00:32:19.118235 kernel: Console: switching to colour frame buffer device 100x37 Jan 20 00:32:19.118248 kernel: fb0: EFI VGA frame buffer device Jan 20 00:32:19.118262 kernel: pstore: Using crash dump compression: deflate Jan 20 00:32:19.118275 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 00:32:19.118289 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:32:19.118306 kernel: Segment Routing with IPv6 Jan 20 00:32:19.118319 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:32:19.118332 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:32:19.118345 kernel: Key type dns_resolver registered Jan 20 00:32:19.118359 kernel: IPI shorthand broadcast: enabled Jan 20 00:32:19.118437 kernel: sched_clock: Marking stable (1266017931, 364519598)->(1798946678, -168409149) Jan 20 00:32:19.118455 kernel: registered taskstats version 1 Jan 20 00:32:19.118469 kernel: Loading compiled-in X.509 certificates Jan 20 00:32:19.118483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:32:19.118502 kernel: Key type .fscrypt registered Jan 20 00:32:19.118570 kernel: Key type fscrypt-provisioning registered Jan 20 00:32:19.118584 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:32:19.118603 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:32:19.118617 kernel: ima: No architecture policies found Jan 20 00:32:19.118631 kernel: clk: Disabling unused clocks Jan 20 00:32:19.118645 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:32:19.118659 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:32:19.118673 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:32:19.118692 kernel: Run /init as init process Jan 20 00:32:19.118705 kernel: with arguments: Jan 20 00:32:19.118719 kernel: /init Jan 20 00:32:19.118733 kernel: with environment: Jan 20 00:32:19.118747 kernel: HOME=/ Jan 20 00:32:19.118760 kernel: TERM=linux Jan 20 00:32:19.118777 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:32:19.118798 systemd[1]: Detected virtualization kvm. Jan 20 00:32:19.118813 systemd[1]: Detected architecture x86-64. Jan 20 00:32:19.118827 systemd[1]: Running in initrd. Jan 20 00:32:19.118841 systemd[1]: No hostname configured, using default hostname. Jan 20 00:32:19.118855 systemd[1]: Hostname set to <localhost>. Jan 20 00:32:19.118870 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:32:19.118884 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:32:19.118898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:32:19.118917 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:32:19.118933 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:32:19.118948 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:32:19.118963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:32:19.118983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:32:19.119004 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:32:19.119020 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:32:19.119034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:32:19.119049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:32:19.119064 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:32:19.119078 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:32:19.119093 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:32:19.119111 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:32:19.119126 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:32:19.119141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:32:19.119156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:32:19.119171 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 20 00:32:19.119185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:32:19.119200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:32:19.119215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:32:19.119230 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:32:19.119249 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 00:32:19.119265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:32:19.119279 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 00:32:19.119294 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 00:32:19.119309 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:32:19.119324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:32:19.119338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:19.119353 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 00:32:19.119371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:32:19.119449 systemd-journald[194]: Collecting audit messages is disabled. Jan 20 00:32:19.119483 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 00:32:19.119505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:19.121179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:32:19.121196 systemd-journald[194]: Journal started Jan 20 00:32:19.121226 systemd-journald[194]: Runtime Journal (/run/log/journal/9e9e7a3690bb4235977d30c221d9acca) is 6.0M, max 48.3M, 42.2M free. Jan 20 00:32:19.099169 systemd-modules-load[195]: Inserted module 'overlay' Jan 20 00:32:19.130675 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:32:19.135939 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:32:19.137762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:32:19.147334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:32:19.158136 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 00:32:19.161601 kernel: Bridge firewalling registered Jan 20 00:32:19.161655 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 20 00:32:19.165720 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 00:32:19.193309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:32:19.198227 dracut-cmdline[219]: dracut-dracut-053 Jan 20 00:32:19.200658 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:32:19.238971 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 20 00:32:19.243790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:32:19.252709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:32:19.282362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:32:19.299791 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:32:19.310190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:32:19.361594 kernel: SCSI subsystem initialized Jan 20 00:32:19.364096 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:32:19.389272 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:32:19.442580 kernel: iscsi: registered transport (tcp) Jan 20 00:32:19.447979 systemd-resolved[307]: Positive Trust Anchors: Jan 20 00:32:19.448019 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:32:19.448059 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:32:19.481961 systemd-resolved[307]: Defaulting to hostname 'linux'. Jan 20 00:32:19.485829 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:32:19.489587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:32:19.514083 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:32:19.514135 kernel: QLogic iSCSI HBA Driver Jan 20 00:32:19.696032 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:32:19.711798 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 00:32:19.743094 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 00:32:19.743155 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:32:19.746198 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 00:32:19.792627 kernel: raid6: avx2x4 gen() 30408 MB/s Jan 20 00:32:19.810687 kernel: raid6: avx2x2 gen() 28416 MB/s Jan 20 00:32:19.830233 kernel: raid6: avx2x1 gen() 23299 MB/s Jan 20 00:32:19.830302 kernel: raid6: using algorithm avx2x4 gen() 30408 MB/s Jan 20 00:32:19.850085 kernel: raid6: .... xor() 4803 MB/s, rmw enabled Jan 20 00:32:19.850149 kernel: raid6: using avx2x2 recovery algorithm Jan 20 00:32:19.871625 kernel: xor: automatically using best checksumming function avx Jan 20 00:32:20.032626 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:32:20.048116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:32:20.077801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:32:20.090895 systemd-udevd[417]: Using default interface naming scheme 'v255'. Jan 20 00:32:20.095928 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:32:20.115761 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:32:20.131848 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Jan 20 00:32:20.179030 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:32:20.199748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:32:20.276825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:32:20.298800 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:32:20.324609 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 00:32:20.326505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:32:20.338360 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:32:20.350359 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:32:20.362202 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:32:20.376159 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 00:32:20.382570 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 00:32:20.385675 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:32:20.402878 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 00:32:20.402914 kernel: GPT:9289727 != 19775487 Jan 20 00:32:20.402945 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 00:32:20.402965 kernel: GPT:9289727 != 19775487 Jan 20 00:32:20.402982 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 00:32:20.405430 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:32:20.407791 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:32:20.731693 kernel: libata version 3.00 loaded. Jan 20 00:32:20.737748 kernel: AVX2 version of gcm_enc/dec engaged. Jan 20 00:32:20.738184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:32:20.743131 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:32:20.765980 kernel: AES CTR mode by8 optimization enabled Jan 20 00:32:20.766009 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 00:32:20.766294 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 00:32:20.766316 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 00:32:20.766668 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 00:32:20.766150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:32:20.783841 kernel: scsi host0: ahci Jan 20 00:32:20.789241 kernel: scsi host1: ahci Jan 20 00:32:20.790892 kernel: scsi host2: ahci Jan 20 00:32:20.771955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:32:20.808686 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461) Jan 20 00:32:20.808714 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473) Jan 20 00:32:20.808730 kernel: scsi host3: ahci Jan 20 00:32:20.778164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 00:32:20.814862 kernel: scsi host4: ahci Jan 20 00:32:20.815119 kernel: scsi host5: ahci Jan 20 00:32:20.815355 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 20 00:32:20.789919 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:20.835738 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 20 00:32:20.835791 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 20 00:32:20.835810 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 20 00:32:20.835826 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 20 00:32:20.835841 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 20 00:32:20.838042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:20.869770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 00:32:20.892060 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 00:32:20.904478 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 00:32:20.912618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 00:32:20.927199 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:32:20.948781 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:32:20.961936 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:32:20.961938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:32:20.962184 disk-uuid[569]: Primary Header is updated. Jan 20 00:32:20.962184 disk-uuid[569]: Secondary Entries is updated. Jan 20 00:32:20.962184 disk-uuid[569]: Secondary Header is updated. Jan 20 00:32:20.962043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:20.969573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:32:20.991734 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:21.006174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:21.032277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:21.054803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:32:21.094005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 00:32:21.134551 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 00:32:21.134611 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 00:32:21.136613 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 00:32:21.141205 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 00:32:21.141228 kernel: ata3.00: applying bridge limits Jan 20 00:32:21.145543 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 00:32:21.149631 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 00:32:21.152630 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 00:32:21.152680 kernel: ata3.00: configured for UDMA/100 Jan 20 00:32:21.158672 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 00:32:21.203760 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 00:32:21.204065 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:32:21.217625 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 00:32:21.971581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:32:21.971747 disk-uuid[570]: The operation has completed successfully. Jan 20 00:32:22.005940 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:32:22.006106 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 00:32:22.027062 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 00:32:22.032471 sh[599]: Success Jan 20 00:32:22.051585 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 20 00:32:22.104016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:32:22.124089 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 00:32:22.130048 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 00:32:22.145634 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 00:32:22.145689 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:32:22.151998 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 00:32:22.152085 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:32:22.154386 kernel: BTRFS info (device dm-0): using free space tree Jan 20 00:32:22.169725 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 00:32:22.170689 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:32:22.186847 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 00:32:22.190853 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:32:22.211743 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:32:22.211768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:32:22.211779 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:32:22.216563 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:32:22.228224 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 00:32:22.234310 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:32:22.242378 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 20 00:32:22.252817 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 00:32:22.322463 ignition[694]: Ignition 2.19.0 Jan 20 00:32:22.322502 ignition[694]: Stage: fetch-offline Jan 20 00:32:22.322618 ignition[694]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:22.322634 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:22.322776 ignition[694]: parsed url from cmdline: "" Jan 20 00:32:22.322781 ignition[694]: no config URL provided Jan 20 00:32:22.322789 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:32:22.322803 ignition[694]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:32:22.322835 ignition[694]: op(1): [started] loading QEMU firmware config module Jan 20 00:32:22.322842 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 00:32:22.333075 ignition[694]: op(1): [finished] loading QEMU firmware config module Jan 20 00:32:22.361711 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:32:22.384754 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:32:22.420895 systemd-networkd[789]: lo: Link UP Jan 20 00:32:22.420939 systemd-networkd[789]: lo: Gained carrier Jan 20 00:32:22.423173 systemd-networkd[789]: Enumeration completed Jan 20 00:32:22.423307 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:32:22.426147 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:22.426153 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:32:22.427972 systemd-networkd[789]: eth0: Link UP Jan 20 00:32:22.427978 systemd-networkd[789]: eth0: Gained carrier Jan 20 00:32:22.427988 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:22.431052 systemd[1]: Reached target network.target - Network. Jan 20 00:32:22.492707 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:32:22.611543 ignition[694]: parsing config with SHA512: 6adb29ca95417d6d64f354bf95145e74ce861f7a4c82f0217e7bae1342d832de04a3e7dcb5c976c4a3b7abb7d057ff4fcbbf143369f21051e4f535b7f4152714 Jan 20 00:32:22.615367 unknown[694]: fetched base config from "system" Jan 20 00:32:22.615635 unknown[694]: fetched user config from "qemu" Jan 20 00:32:22.616028 ignition[694]: fetch-offline: fetch-offline passed Jan 20 00:32:22.618704 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:32:22.616090 ignition[694]: Ignition finished successfully Jan 20 00:32:22.620103 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 00:32:22.646621 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 20 00:32:22.663716 ignition[793]: Ignition 2.19.0 Jan 20 00:32:22.663757 ignition[793]: Stage: kargs Jan 20 00:32:22.664008 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:22.664028 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:22.665351 ignition[793]: kargs: kargs passed Jan 20 00:32:22.665460 ignition[793]: Ignition finished successfully Jan 20 00:32:22.684382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 00:32:22.702888 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 00:32:22.734496 ignition[802]: Ignition 2.19.0 Jan 20 00:32:22.734584 ignition[802]: Stage: disks Jan 20 00:32:22.734818 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:22.734831 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:22.746615 ignition[802]: disks: disks passed Jan 20 00:32:22.746710 ignition[802]: Ignition finished successfully Jan 20 00:32:22.753502 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 00:32:22.761115 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 00:32:22.761313 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:32:22.771960 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:32:22.779345 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:32:22.785643 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:32:22.810881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 00:32:22.831245 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 20 00:32:22.837389 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 00:32:22.852697 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 00:32:22.968611 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 00:32:22.968865 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 00:32:22.969597 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 00:32:22.994763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:32:23.017319 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (820) Jan 20 00:32:23.017356 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:32:23.017375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:32:23.017433 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:32:22.998869 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 00:32:23.032958 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:32:23.017592 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 00:32:23.017641 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 00:32:23.017667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:32:23.026626 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 00:32:23.033021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 00:32:23.034650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 00:32:23.091654 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 00:32:23.098088 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Jan 20 00:32:23.103758 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 00:32:23.110806 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 00:32:23.249674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 00:32:23.266721 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 00:32:23.275129 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 00:32:23.279291 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 00:32:23.288750 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:32:23.320561 ignition[933]: INFO : Ignition 2.19.0 Jan 20 00:32:23.320561 ignition[933]: INFO : Stage: mount Jan 20 00:32:23.320561 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:23.320561 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:23.335434 ignition[933]: INFO : mount: mount passed Jan 20 00:32:23.335434 ignition[933]: INFO : Ignition finished successfully Jan 20 00:32:23.342902 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 00:32:23.352775 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 00:32:23.365873 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 00:32:23.377743 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:32:23.398610 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948) Jan 20 00:32:23.406586 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:32:23.406624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:32:23.406636 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:32:23.416646 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:32:23.418632 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 00:32:23.455044 ignition[965]: INFO : Ignition 2.19.0 Jan 20 00:32:23.455044 ignition[965]: INFO : Stage: files Jan 20 00:32:23.461451 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:23.461451 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:23.461451 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 20 00:32:23.461451 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 00:32:23.461451 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 00:32:23.461451 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 00:32:23.461451 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 00:32:23.498616 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 00:32:23.498616 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 00:32:23.498616 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 00:32:23.461988 unknown[965]: wrote ssh authorized keys file for user: core Jan 20 00:32:23.552597 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 00:32:23.608314 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 00:32:23.608314 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:32:23.625151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 00:32:23.628777 systemd-networkd[789]: eth0: Gained IPv6LL Jan 20 00:32:23.905279 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 00:32:24.766842 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:32:24.766842 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 00:32:24.795711 ignition[965]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 00:32:24.906568 ignition[965]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:32:24.916911 ignition[965]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:32:24.916911 ignition[965]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 00:32:24.916911 ignition[965]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:32:24.916911 ignition[965]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 00:32:24.916911 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:32:24.916911 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:32:24.916911 ignition[965]: INFO : files: files passed Jan 20 00:32:24.916911 ignition[965]: INFO : Ignition finished successfully Jan 20 00:32:24.921054 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:32:24.985785 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:32:25.003218 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 20 00:32:25.013884 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 00:32:25.046827 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 00:32:25.014078 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:32:25.064664 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:32:25.064664 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:32:25.050674 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:32:25.093452 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:32:25.060762 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:32:25.106045 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:32:25.188668 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:32:25.188896 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:32:25.208493 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 00:32:25.229781 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:32:25.240276 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:32:25.255990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:32:25.298691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:32:25.327811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:32:25.357351 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:32:25.362966 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:32:25.378828 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:32:25.386667 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:32:25.388943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:32:25.399691 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:32:25.413947 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:32:25.417789 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:32:25.426784 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:32:25.428912 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:32:25.429149 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:32:25.429304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 20 00:32:25.723203 ignition[1019]: INFO : Ignition 2.19.0 Jan 20 00:32:25.723203 ignition[1019]: INFO : Stage: umount Jan 20 00:32:25.723203 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:32:25.723203 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:32:25.723203 ignition[1019]: INFO : umount: umount passed Jan 20 00:32:25.723203 ignition[1019]: INFO : Ignition finished successfully Jan 20 00:32:25.429623 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:32:25.429788 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:32:25.429928 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:32:25.430034 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:32:25.430219 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:32:25.430684 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:32:25.430879 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:32:25.430981 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 00:32:25.434473 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:32:25.437946 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:32:25.438951 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:32:25.442216 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:32:25.446468 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:32:25.447675 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:32:25.448316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:32:25.449462 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:32:25.453818 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:32:25.453982 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:32:25.454145 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:32:25.454286 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:32:25.456624 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:32:25.456807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:32:25.460209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:32:25.460471 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:32:25.460783 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 00:32:25.460975 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:32:25.571956 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:32:25.581362 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:32:25.584217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:32:25.585927 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:32:25.590844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 00:32:25.591778 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:32:25.616472 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 20 00:32:25.616710 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:32:25.622888 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:32:25.623104 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:32:25.624611 systemd[1]: Stopped target network.target - Network. Jan 20 00:32:25.629646 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 00:32:25.629738 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:32:25.629856 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:32:25.629933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:32:25.630042 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:32:25.630116 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:32:25.630214 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:32:25.630285 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:32:25.630774 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:32:25.631716 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 00:32:25.656357 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:32:25.683200 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:32:25.684488 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 00:32:25.711569 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:32:25.711672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:32:25.723981 systemd-networkd[789]: eth0: DHCPv6 lease lost Jan 20 00:32:25.804846 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:32:25.805056 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:32:25.818281 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:32:25.818357 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:32:25.859962 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:32:25.878592 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:32:25.878710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:32:25.895198 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:32:25.895309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:32:25.925010 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:32:25.925127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:32:25.929303 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:32:25.939376 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:32:25.939650 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:32:25.967141 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:32:25.967369 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:32:25.971289 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:32:25.971396 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 20 00:32:25.977141 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:32:25.977228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:32:25.987809 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:32:25.987919 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:32:25.998327 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:32:25.998458 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:32:26.008377 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:32:26.008675 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:32:26.022682 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:32:26.022798 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 00:32:26.052197 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:32:26.182397 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:32:26.182606 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:32:26.194357 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 00:32:26.194502 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:32:26.207049 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:32:26.207169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:32:26.218876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:32:26.218992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:26.230117 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:32:26.233197 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:32:26.240233 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 00:32:26.243656 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:32:26.251846 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 00:32:26.266741 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:32:26.278115 systemd[1]: Switching root. Jan 20 00:32:26.304587 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 20 00:32:26.304656 systemd-journald[194]: Journal stopped Jan 20 00:32:28.345469 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:32:28.345902 kernel: SELinux: policy capability open_perms=1 Jan 20 00:32:28.345928 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:32:28.345944 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:32:28.349114 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:32:28.349156 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:32:28.349182 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:32:28.349200 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:32:28.349217 kernel: audit: type=1403 audit(1768869146.518:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 00:32:28.349249 systemd[1]: Successfully loaded SELinux policy in 70.085ms. Jan 20 00:32:28.349283 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.366ms. 
Jan 20 00:32:28.349305 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:32:28.349322 systemd[1]: Detected virtualization kvm. Jan 20 00:32:28.349338 systemd[1]: Detected architecture x86-64. Jan 20 00:32:28.349349 systemd[1]: Detected first boot. Jan 20 00:32:28.349363 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:32:28.349374 zram_generator::config[1065]: No configuration found. Jan 20 00:32:28.349385 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:32:28.349396 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 00:32:28.349407 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 00:32:28.349459 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 00:32:28.349473 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:32:28.349484 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:32:28.349499 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:32:28.349555 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:32:28.349568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:32:28.349579 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:32:28.349590 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:32:28.349601 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:32:28.349611 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:32:28.349622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:32:28.349633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:32:28.349654 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 00:32:28.349672 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 00:32:28.349692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:32:28.349710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 00:32:28.349730 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:32:28.349750 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 00:32:28.349769 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 00:32:28.349781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 00:32:28.349792 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:32:28.349806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:32:28.349817 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:32:28.349828 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:32:28.349843 systemd[1]: Reached target swap.target - Swaps. 
Jan 20 00:32:28.349854 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:32:28.349864 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:32:28.349875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:32:28.349886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:32:28.349899 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:32:28.349910 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:32:28.349920 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:32:28.349931 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:32:28.349942 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:32:28.349953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:28.349971 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:32:28.349991 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:32:28.350015 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:32:28.350028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:32:28.350039 systemd[1]: Reached target machines.target - Containers. Jan 20 00:32:28.350050 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:32:28.350061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:32:28.350073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:32:28.350090 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 00:32:28.350110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:32:28.350129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:32:28.350152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:32:28.350164 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 00:32:28.350174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:32:28.350185 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 00:32:28.350196 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 00:32:28.350206 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 00:32:28.350217 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 00:32:28.350227 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 00:32:28.350240 kernel: fuse: init (API version 7.39) Jan 20 00:32:28.350251 kernel: ACPI: bus type drm_connector registered Jan 20 00:32:28.350261 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:32:28.350271 kernel: loop: module loaded Jan 20 00:32:28.350282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 20 00:32:28.350292 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 00:32:28.350303 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:32:28.350315 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:32:28.350325 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 00:32:28.350365 systemd-journald[1149]: Collecting audit messages is disabled. Jan 20 00:32:28.350387 systemd[1]: Stopped verity-setup.service. Jan 20 00:32:28.350398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:28.350411 systemd-journald[1149]: Journal started Jan 20 00:32:28.350472 systemd-journald[1149]: Runtime Journal (/run/log/journal/9e9e7a3690bb4235977d30c221d9acca) is 6.0M, max 48.3M, 42.2M free. Jan 20 00:32:27.631572 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:32:27.678912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 00:32:27.679956 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 00:32:27.680653 systemd[1]: systemd-journald.service: Consumed 1.959s CPU time. Jan 20 00:32:28.368474 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:32:28.369940 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:32:28.374771 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:32:28.379808 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:32:28.384268 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:32:28.388629 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:32:28.392761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 00:32:28.396329 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 00:32:28.400728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:32:28.405653 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:32:28.405928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:32:28.410729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:28.411023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:28.416125 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:32:28.416407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:32:28.423115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:28.423475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:28.428032 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 00:32:28.428343 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:32:28.432228 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:28.432613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:32:28.438409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:32:28.444965 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 20 00:32:28.451798 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 00:32:28.474575 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:32:28.490728 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:32:28.497105 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 00:32:28.501967 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:32:28.502136 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:32:28.508148 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 00:32:28.515327 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:32:28.521943 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:32:28.526213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:32:28.529612 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:32:28.534905 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:32:28.538925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:32:28.541596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:32:28.547498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:32:28.549599 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:32:28.553263 systemd-journald[1149]: Time spent on flushing to /var/log/journal/9e9e7a3690bb4235977d30c221d9acca is 29.480ms for 985 entries. Jan 20 00:32:28.553263 systemd-journald[1149]: System Journal (/var/log/journal/9e9e7a3690bb4235977d30c221d9acca) is 8.0M, max 195.6M, 187.6M free. Jan 20 00:32:28.605838 systemd-journald[1149]: Received client request to flush runtime journal. Jan 20 00:32:28.555606 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 00:32:28.572834 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:32:28.581818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:32:28.587926 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:32:28.593006 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:32:28.598088 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:32:28.603070 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:32:28.608228 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:32:28.616725 kernel: loop0: detected capacity change from 0 to 142488 Jan 20 00:32:28.622827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:32:28.647677 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 20 00:32:28.660598 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 00:32:28.663624 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:32:28.668863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:32:28.674757 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jan 20 00:32:28.674783 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jan 20 00:32:28.684334 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:32:28.701826 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:32:28.707570 kernel: loop1: detected capacity change from 0 to 140768 Jan 20 00:32:28.709308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:32:28.710411 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 00:32:28.722790 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 20 00:32:28.757580 kernel: loop2: detected capacity change from 0 to 224512 Jan 20 00:32:28.761084 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 00:32:28.772939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:32:28.808911 kernel: loop3: detected capacity change from 0 to 142488 Jan 20 00:32:28.812117 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 20 00:32:28.812760 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 20 00:32:28.820922 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:32:28.831588 kernel: loop4: detected capacity change from 0 to 140768 Jan 20 00:32:28.853654 kernel: loop5: detected capacity change from 0 to 224512 Jan 20 00:32:28.866804 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:32:28.868617 (sd-merge)[1206]: Merged extensions into '/usr'. Jan 20 00:32:28.875819 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:32:28.875845 systemd[1]: Reloading... Jan 20 00:32:28.965117 zram_generator::config[1233]: No configuration found. Jan 20 00:32:29.081855 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 00:32:29.144707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:32:29.193859 systemd[1]: Reloading finished in 317 ms. Jan 20 00:32:29.235363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:32:29.239317 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:32:29.243336 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:32:29.263887 systemd[1]: Starting ensure-sysext.service... Jan 20 00:32:29.269327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:32:29.276831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 20 00:32:29.289286 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:32:29.289301 systemd[1]: Reloading... Jan 20 00:32:29.299464 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:32:29.300258 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:32:29.301314 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:32:29.301746 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 20 00:32:29.301885 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 20 00:32:29.305625 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:32:29.305657 systemd-tmpfiles[1274]: Skipping /boot Jan 20 00:32:29.317943 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:32:29.317978 systemd-tmpfiles[1274]: Skipping /boot Jan 20 00:32:29.327155 systemd-udevd[1275]: Using default interface naming scheme 'v255'. Jan 20 00:32:29.370612 zram_generator::config[1305]: No configuration found. Jan 20 00:32:29.426573 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1339) Jan 20 00:32:29.508586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 20 00:32:29.515775 kernel: ACPI: button: Power Button [PWRF] Jan 20 00:32:29.524585 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 00:32:29.524945 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 00:32:29.533083 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 00:32:29.533326 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 00:32:29.545638 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 00:32:29.547612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:32:29.723066 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:32:29.733822 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 00:32:29.736721 systemd[1]: Reloading finished in 446 ms. Jan 20 00:32:29.744633 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:32:29.772751 kernel: kvm_amd: TSC scaling supported Jan 20 00:32:29.772830 kernel: kvm_amd: Nested Virtualization enabled Jan 20 00:32:29.772882 kernel: kvm_amd: Nested Paging enabled Jan 20 00:32:29.776614 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 00:32:29.776406 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:32:29.777612 kernel: kvm_amd: PMU virtualization is disabled Jan 20 00:32:29.838682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:32:29.853607 kernel: EDAC MC: Ver: 3.0.0 Jan 20 00:32:29.876055 systemd[1]: Finished ensure-sysext.service. Jan 20 00:32:29.882487 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 00:32:29.905698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 20 00:32:29.916859 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:32:29.923604 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 00:32:29.928391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:32:29.929946 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 00:32:29.936843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:32:29.944198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:32:29.952222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:32:29.952819 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:32:29.961271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:32:29.967070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:32:29.969040 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 00:32:29.979406 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:32:29.989843 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:32:29.998785 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:32:30.006803 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:32:30.013610 augenrules[1401]: No rules Jan 20 00:32:30.013172 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:32:30.019730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:32:30.024132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:32:30.026279 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:32:30.031704 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:32:30.038300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:32:30.038681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:32:30.044357 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:32:30.044737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:32:30.052129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:32:30.052353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:32:30.053052 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:32:30.053331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:32:30.055873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:32:30.056603 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:32:30.065661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:32:30.068369 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 20 00:32:30.069314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:32:30.069499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:32:30.074767 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:32:30.079737 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:32:30.080871 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:32:30.083361 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:32:30.088061 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:32:30.091901 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:32:30.108677 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:32:30.121375 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:32:30.151048 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:32:30.166046 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:32:30.230917 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:32:30.235872 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:32:30.240697 systemd-resolved[1398]: Positive Trust Anchors: Jan 20 00:32:30.240745 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:32:30.240793 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:32:30.246414 systemd-resolved[1398]: Defaulting to hostname 'linux'. Jan 20 00:32:30.247392 systemd-networkd[1395]: lo: Link UP Jan 20 00:32:30.247401 systemd-networkd[1395]: lo: Gained carrier Jan 20 00:32:30.248885 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:32:30.253699 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:32:30.258228 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:32:30.262490 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:32:30.267963 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:32:30.273381 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:32:30.278206 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:32:30.283670 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 20 00:32:30.289286 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:32:30.289343 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:32:30.293640 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:32:30.297720 systemd-networkd[1395]: Enumeration completed Jan 20 00:32:30.298740 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:32:30.304390 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:30.304851 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:32:30.305871 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:32:30.309814 systemd-networkd[1395]: eth0: Link UP Jan 20 00:32:30.309839 systemd-networkd[1395]: eth0: Gained carrier Jan 20 00:32:30.309859 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:32:30.314967 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:32:30.319119 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:32:30.323050 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:32:30.327384 systemd[1]: Reached target network.target - Network. Jan 20 00:32:30.330968 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:32:30.334978 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:32:30.338727 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:32:30.338801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:32:30.354676 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:32:30.354743 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:32:30.360854 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Jan 20 00:32:31.048018 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:32:31.048034 systemd-resolved[1398]: Clock change detected. Flushing caches. Jan 20 00:32:31.048072 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:32:31.048140 systemd-timesyncd[1399]: Initial clock synchronization to Tue 2026-01-20 00:32:31.047860 UTC. Jan 20 00:32:31.055018 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:32:31.061256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:32:31.065120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:32:31.067575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:32:31.074388 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:32:31.081064 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:32:31.083588 dbus-daemon[1439]: [system] SELinux support is enabled Jan 20 00:32:31.089054 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 20 00:32:31.091629 jq[1440]: false Jan 20 00:32:31.102755 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:32:31.105767 extend-filesystems[1441]: Found loop3 Jan 20 00:32:31.105767 extend-filesystems[1441]: Found loop4 Jan 20 00:32:31.105767 extend-filesystems[1441]: Found loop5 Jan 20 00:32:31.105767 extend-filesystems[1441]: Found sr0 Jan 20 00:32:31.105767 extend-filesystems[1441]: Found vda Jan 20 00:32:31.105767 extend-filesystems[1441]: Found vda1 Jan 20 00:32:31.159884 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1344) Jan 20 00:32:31.111382 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:32:31.160263 extend-filesystems[1441]: Found vda2 Jan 20 00:32:31.160263 extend-filesystems[1441]: Found vda3 Jan 20 00:32:31.160263 extend-filesystems[1441]: Found usr Jan 20 00:32:31.160263 extend-filesystems[1441]: Found vda4 Jan 20 00:32:31.160263 extend-filesystems[1441]: Found vda6 Jan 20 00:32:31.160263 extend-filesystems[1441]: Found vda7 Jan 20 00:32:31.160263 extend-filesystems[1441]: Found vda9 Jan 20 00:32:31.160263 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 20 00:32:31.160263 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 20 00:32:31.225175 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:32:31.119979 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:32:31.226276 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:32:31.121283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:32:31.126805 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:32:31.237107 update_engine[1457]: I20260120 00:32:31.203845 1457 main.cc:92] Flatcar Update Engine starting Jan 20 00:32:31.237107 update_engine[1457]: I20260120 00:32:31.207340 1457 update_check_scheduler.cc:74] Next update check in 11m1s Jan 20 00:32:31.137016 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:32:31.237717 jq[1460]: true Jan 20 00:32:31.153766 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:32:31.194881 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:32:31.195147 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:32:31.195718 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:32:31.195969 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:32:31.224264 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:32:31.225612 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:32:31.239999 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:32:31.240032 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:32:31.242928 systemd-logind[1452]: New seat seat0. Jan 20 00:32:31.250232 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 20 00:32:31.268592 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:32:31.321105 tar[1466]: linux-amd64/LICENSE Jan 20 00:32:31.321105 tar[1466]: linux-amd64/helm Jan 20 00:32:31.274302 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:32:31.279873 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 00:32:31.451170 jq[1467]: true Jan 20 00:32:31.451566 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:32:31.451566 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:32:31.451566 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:32:31.477027 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:32:31.285860 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:32:31.483023 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 20 00:32:31.295185 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:32:31.295325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:32:31.575004 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:32:31.300611 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:32:31.300756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:32:31.310129 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:32:31.451071 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:32:31.451396 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:32:31.471130 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:32:31.499991 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:32:31.588742 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:32:31.592993 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:32:31.598770 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:32:31.600951 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:32:31.601398 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:32:31.733365 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:32:31.787642 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:32:31.802292 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:32:31.807943 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:32:31.812357 systemd[1]: Reached target getty.target - Login Prompts. 
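extend-filesystems and the EXT4 driver above grow /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each. Translating those block counts into more familiar units (a back-of-the-envelope check, not something the log itself performs):

    # Block counts reported by resize2fs/EXT4 above; the filesystem uses 4 KiB blocks.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB

In other words, the on-line resize expands the root filesystem from roughly 2.1 GiB to about 7.1 GiB, consistent with extend-filesystems.service growing ROOT to fill the disk on first boot.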
Jan 20 00:32:32.406894 containerd[1468]: time="2026-01-20T00:32:32.406771990Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:32:32.426695 tar[1466]: linux-amd64/README.md Jan 20 00:32:32.442945 systemd-networkd[1395]: eth0: Gained IPv6LL Jan 20 00:32:32.459537 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:32:32.459917 containerd[1468]: time="2026-01-20T00:32:32.459564201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:32.465215 containerd[1468]: time="2026-01-20T00:32:32.464868308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:32.465215 containerd[1468]: time="2026-01-20T00:32:32.465199446Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:32:32.465319 containerd[1468]: time="2026-01-20T00:32:32.465300274Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:32:32.465778 containerd[1468]: time="2026-01-20T00:32:32.465726230Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:32:32.465896 containerd[1468]: time="2026-01-20T00:32:32.465863416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466212 containerd[1468]: time="2026-01-20T00:32:32.465989531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466212 containerd[1468]: time="2026-01-20T00:32:32.466018355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466396 containerd[1468]: time="2026-01-20T00:32:32.466329646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466396 containerd[1468]: time="2026-01-20T00:32:32.466389138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466473 containerd[1468]: time="2026-01-20T00:32:32.466455742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466535 containerd[1468]: time="2026-01-20T00:32:32.466472713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:32.466695 containerd[1468]: time="2026-01-20T00:32:32.466660053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:32:32.467052 containerd[1468]: time="2026-01-20T00:32:32.466986252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:32:32.467826 containerd[1468]: time="2026-01-20T00:32:32.467234285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:32:32.467826 containerd[1468]: time="2026-01-20T00:32:32.467263860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:32:32.467826 containerd[1468]: time="2026-01-20T00:32:32.467468963Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:32:32.467826 containerd[1468]: time="2026-01-20T00:32:32.467633470Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:32:32.467898 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:32:32.475724 containerd[1468]: time="2026-01-20T00:32:32.475648689Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:32:32.475789 containerd[1468]: time="2026-01-20T00:32:32.475740010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:32:32.476001 containerd[1468]: time="2026-01-20T00:32:32.475888367Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:32:32.476001 containerd[1468]: time="2026-01-20T00:32:32.475952977Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:32:32.476088 containerd[1468]: time="2026-01-20T00:32:32.476033037Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:32:32.476262 containerd[1468]: time="2026-01-20T00:32:32.476173409Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:32:32.477649 containerd[1468]: time="2026-01-20T00:32:32.477266900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:32:32.478933 containerd[1468]: time="2026-01-20T00:32:32.478871987Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:32:32.478933 containerd[1468]: time="2026-01-20T00:32:32.478912672Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:32:32.478933 containerd[1468]: time="2026-01-20T00:32:32.478926378Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.478938742Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.478951415Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.478962526Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.478974498Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.478986961Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.478998523Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.479009393Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479032 containerd[1468]: time="2026-01-20T00:32:32.479019572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479062332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479075868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479086968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479097709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479107888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479119409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479129557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479140047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479150497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479162819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479176586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479186744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479196923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479209477Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:32:32.479257 containerd[1468]: time="2026-01-20T00:32:32.479231839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479242438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479252006Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479334490Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479352243Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479362653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479377210Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479385796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479559360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479629110Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:32:32.479876 containerd[1468]: time="2026-01-20T00:32:32.479653917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 00:32:32.480252 containerd[1468]: time="2026-01-20T00:32:32.480101692Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:32:32.480252 containerd[1468]: time="2026-01-20T00:32:32.480224802Z" level=info msg="Connect containerd service" Jan 20 00:32:32.480252 containerd[1468]: time="2026-01-20T00:32:32.480272130Z" level=info msg="using legacy CRI server" Jan 20 00:32:32.480252 containerd[1468]: time="2026-01-20T00:32:32.480281438Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:32:32.481211 containerd[1468]: time="2026-01-20T00:32:32.480462326Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:32:32.481600 containerd[1468]: time="2026-01-20T00:32:32.481385510Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:32:32.482597 
containerd[1468]: time="2026-01-20T00:32:32.481857857Z" level=info msg="Start subscribing containerd event" Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.481968713Z" level=info msg="Start recovering state" Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.481932300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.482121142Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.482131608Z" level=info msg="Start event monitor" Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.482368841Z" level=info msg="Start snapshots syncer" Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.482387966Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:32:32.482597 containerd[1468]: time="2026-01-20T00:32:32.482399999Z" level=info msg="Start streaming server" Jan 20 00:32:32.482344 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:32:32.483259 containerd[1468]: time="2026-01-20T00:32:32.483236811Z" level=info msg="containerd successfully booted in 0.078588s" Jan 20 00:32:32.489930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:32.496811 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:32:32.502206 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:32:32.509154 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:32:32.527778 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:32:32.543873 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:32:32.544276 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:32:32.549799 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:32:33.663013 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:32:33.678929 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:49110.service - OpenSSH per-connection server daemon (10.0.0.1:49110). Jan 20 00:32:33.843722 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 49110 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:33.847701 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:33.864742 systemd-logind[1452]: New session 1 of user core. Jan 20 00:32:33.866044 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:32:33.908196 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:32:34.033580 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:32:34.071063 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 00:32:34.081253 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:32:34.514994 systemd[1551]: Queued start job for default target default.target. Jan 20 00:32:34.603142 systemd[1551]: Created slice app.slice - User Application Slice. Jan 20 00:32:34.603217 systemd[1551]: Reached target paths.target - Paths. Jan 20 00:32:34.603244 systemd[1551]: Reached target timers.target - Timers. Jan 20 00:32:34.606659 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 20 00:32:34.631553 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:32:34.631810 systemd[1551]: Reached target sockets.target - Sockets. Jan 20 00:32:34.631870 systemd[1551]: Reached target basic.target - Basic System. Jan 20 00:32:34.631940 systemd[1551]: Reached target default.target - Main User Target. Jan 20 00:32:34.632029 systemd[1551]: Startup finished in 534ms. Jan 20 00:32:34.632159 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:32:34.637895 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:32:34.715659 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:49124.service - OpenSSH per-connection server daemon (10.0.0.1:49124). Jan 20 00:32:34.851824 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 49124 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:34.857328 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:34.868232 systemd-logind[1452]: New session 2 of user core. Jan 20 00:32:34.876739 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:32:34.940167 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:34.951305 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:49124.service: Deactivated successfully. Jan 20 00:32:34.953386 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:32:34.955717 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:32:34.958899 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:49130.service - OpenSSH per-connection server daemon (10.0.0.1:49130). Jan 20 00:32:34.963940 systemd-logind[1452]: Removed session 2. Jan 20 00:32:35.011961 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 49130 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:35.013766 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:35.019465 systemd-logind[1452]: New session 3 of user core. Jan 20 00:32:35.033834 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:32:35.097699 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:35.102671 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:49130.service: Deactivated successfully. Jan 20 00:32:35.105203 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:32:35.106546 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:32:35.108389 systemd-logind[1452]: Removed session 3. Jan 20 00:32:35.158934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:35.163885 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:32:35.164601 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:35.168775 systemd[1]: Startup finished in 1.415s (kernel) + 7.734s (initrd) + 8.032s (userspace) = 17.182s. 
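The "Startup finished" line splits boot time into kernel, initrd and userspace phases. Summing the printed figures reproduces the total to within a millisecond; systemd adds the unrounded microsecond counters before formatting, which is why it prints 17.182s rather than 17.181s:

    # Phase durations (seconds) exactly as printed in the "Startup finished" line above.
    kernel, initrd, userspace = 1.415, 7.734, 8.032
    print(f"{kernel + initrd + userspace:.3f}s")  # 17.181s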
Jan 20 00:32:35.614067 kubelet[1581]: E0120 00:32:35.613945 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:35.619015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:35.619237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:35.619782 systemd[1]: kubelet.service: Consumed 2.764s CPU time. Jan 20 00:32:45.109609 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:43340.service - OpenSSH per-connection server daemon (10.0.0.1:43340). Jan 20 00:32:45.149882 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 43340 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:45.152068 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:45.157861 systemd-logind[1452]: New session 4 of user core. Jan 20 00:32:45.175688 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:32:45.233294 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:45.244606 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:43340.service: Deactivated successfully. Jan 20 00:32:45.246039 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:32:45.247674 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:32:45.248939 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:43352.service - OpenSSH per-connection server daemon (10.0.0.1:43352). Jan 20 00:32:45.250194 systemd-logind[1452]: Removed session 4. Jan 20 00:32:45.310854 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 43352 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:45.312298 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:45.317366 systemd-logind[1452]: New session 5 of user core. Jan 20 00:32:45.326723 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:32:45.378837 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:45.390690 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:43352.service: Deactivated successfully. Jan 20 00:32:45.392225 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:32:45.393794 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:32:45.395893 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:43358.service - OpenSSH per-connection server daemon (10.0.0.1:43358). Jan 20 00:32:45.397205 systemd-logind[1452]: Removed session 5. Jan 20 00:32:45.430592 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 43358 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:45.431937 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:45.437051 systemd-logind[1452]: New session 6 of user core. Jan 20 00:32:45.446649 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:32:45.504682 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:45.521028 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:43358.service: Deactivated successfully. Jan 20 00:32:45.523124 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:32:45.524811 systemd-logind[1452]: Session 6 logged out. 
Waiting for processes to exit. Jan 20 00:32:45.534002 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:43362.service - OpenSSH per-connection server daemon (10.0.0.1:43362). Jan 20 00:32:45.535070 systemd-logind[1452]: Removed session 6. Jan 20 00:32:45.573264 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 43362 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:45.575895 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:45.581782 systemd-logind[1452]: New session 7 of user core. Jan 20 00:32:45.599774 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:32:45.664139 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:32:45.664828 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:45.666112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:32:45.673872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:45.689738 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 20 00:32:45.692858 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:45.697695 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:43362.service: Deactivated successfully. Jan 20 00:32:45.699332 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:32:45.701380 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:32:45.713946 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:43366.service - OpenSSH per-connection server daemon (10.0.0.1:43366). Jan 20 00:32:45.715630 systemd-logind[1452]: Removed session 7. Jan 20 00:32:45.749897 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 43366 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:45.752220 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:45.758689 systemd-logind[1452]: New session 8 of user core. Jan 20 00:32:45.772741 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 00:32:45.834732 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:32:45.835317 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:45.841110 sudo[1630]: pam_unix(sudo:session): session closed for user root Jan 20 00:32:45.850803 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:32:45.851303 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:45.870900 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:32:45.873084 auditctl[1633]: No rules Jan 20 00:32:45.873722 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:32:45.874068 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:32:45.878166 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:32:45.920787 augenrules[1651]: No rules Jan 20 00:32:45.922609 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 20 00:32:45.923902 sudo[1629]: pam_unix(sudo:session): session closed for user root Jan 20 00:32:45.925947 sshd[1626]: pam_unix(sshd:session): session closed for user core Jan 20 00:32:45.943356 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:43366.service: Deactivated successfully. Jan 20 00:32:45.944985 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:32:45.946977 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:32:45.948420 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:43382.service - OpenSSH per-connection server daemon (10.0.0.1:43382). Jan 20 00:32:45.950265 systemd-logind[1452]: Removed session 8. Jan 20 00:32:46.013406 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 43382 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:32:46.016329 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:32:46.023449 systemd-logind[1452]: New session 9 of user core. Jan 20 00:32:46.031777 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:32:46.089795 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:32:46.090166 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:32:46.185030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:32:46.190823 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:46.248313 kubelet[1677]: E0120 00:32:46.248214 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:46.254724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:46.254919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:46.395040 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:32:46.395118 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:32:46.682614 dockerd[1694]: time="2026-01-20T00:32:46.682359032Z" level=info msg="Starting up" Jan 20 00:32:46.829637 systemd[1]: var-lib-docker-metacopy\x2dcheck1490013252-merged.mount: Deactivated successfully. Jan 20 00:32:46.871755 dockerd[1694]: time="2026-01-20T00:32:46.871628872Z" level=info msg="Loading containers: start." Jan 20 00:32:47.040589 kernel: Initializing XFRM netlink socket Jan 20 00:32:47.160869 systemd-networkd[1395]: docker0: Link UP Jan 20 00:32:47.197368 dockerd[1694]: time="2026-01-20T00:32:47.197271554Z" level=info msg="Loading containers: done." Jan 20 00:32:47.220390 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1406386888-merged.mount: Deactivated successfully. 
Jan 20 00:32:47.223703 dockerd[1694]: time="2026-01-20T00:32:47.223619444Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:32:47.223900 dockerd[1694]: time="2026-01-20T00:32:47.223832332Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:32:47.224037 dockerd[1694]: time="2026-01-20T00:32:47.223984426Z" level=info msg="Daemon has completed initialization" Jan 20 00:32:47.293568 dockerd[1694]: time="2026-01-20T00:32:47.293383683Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:32:47.293708 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:32:48.099745 containerd[1468]: time="2026-01-20T00:32:48.099605299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 00:32:48.600322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353791225.mount: Deactivated successfully. Jan 20 00:32:50.730333 containerd[1468]: time="2026-01-20T00:32:50.730205956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:50.731457 containerd[1468]: time="2026-01-20T00:32:50.731285411Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 00:32:50.732633 containerd[1468]: time="2026-01-20T00:32:50.732561884Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:50.736472 containerd[1468]: time="2026-01-20T00:32:50.736305874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:50.738155 containerd[1468]: time="2026-01-20T00:32:50.738035752Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.638355964s" Jan 20 00:32:50.738155 containerd[1468]: time="2026-01-20T00:32:50.738123796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 00:32:50.739856 containerd[1468]: time="2026-01-20T00:32:50.739813360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 00:32:53.832711 containerd[1468]: time="2026-01-20T00:32:53.832468518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:53.835169 containerd[1468]: time="2026-01-20T00:32:53.834793000Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 00:32:53.839633 containerd[1468]: time="2026-01-20T00:32:53.839284814Z" level=info msg="ImageCreate event 
name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:53.869967 containerd[1468]: time="2026-01-20T00:32:53.869735208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:53.872011 containerd[1468]: time="2026-01-20T00:32:53.871890311Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.13201197s" Jan 20 00:32:53.872011 containerd[1468]: time="2026-01-20T00:32:53.871991210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 00:32:53.873686 containerd[1468]: time="2026-01-20T00:32:53.873217214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 00:32:56.506827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:32:56.527654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:32:56.864333 containerd[1468]: time="2026-01-20T00:32:56.863731160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:56.866671 containerd[1468]: time="2026-01-20T00:32:56.866334340Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 00:32:56.870282 containerd[1468]: time="2026-01-20T00:32:56.870224484Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:56.874692 containerd[1468]: time="2026-01-20T00:32:56.874631132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:32:56.876941 containerd[1468]: time="2026-01-20T00:32:56.876652415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.003389416s" Jan 20 00:32:56.876941 containerd[1468]: time="2026-01-20T00:32:56.876691188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 00:32:56.878170 containerd[1468]: time="2026-01-20T00:32:56.878130624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 00:32:57.216264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:32:57.230331 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:32:57.494465 kubelet[1916]: E0120 00:32:57.494244 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:32:57.500884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:32:57.501257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:32:57.501822 systemd[1]: kubelet.service: Consumed 1.004s CPU time. Jan 20 00:32:59.137624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811796528.mount: Deactivated successfully. Jan 20 00:33:00.404930 containerd[1468]: time="2026-01-20T00:33:00.404758203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:00.405996 containerd[1468]: time="2026-01-20T00:33:00.405906920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 00:33:00.407399 containerd[1468]: time="2026-01-20T00:33:00.407289178Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:00.410011 containerd[1468]: time="2026-01-20T00:33:00.409960077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:00.411272 containerd[1468]: time="2026-01-20T00:33:00.410272927Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.53211382s" Jan 20 00:33:00.411272 containerd[1468]: time="2026-01-20T00:33:00.410305348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 00:33:00.411732 containerd[1468]: time="2026-01-20T00:33:00.411405776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 00:33:01.432702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274209580.mount: Deactivated successfully. 
Jan 20 00:33:05.223850 containerd[1468]: time="2026-01-20T00:33:05.223712312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:05.225888 containerd[1468]: time="2026-01-20T00:33:05.225744156Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 00:33:05.227266 containerd[1468]: time="2026-01-20T00:33:05.227187718Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:05.233146 containerd[1468]: time="2026-01-20T00:33:05.233073977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:05.234853 containerd[1468]: time="2026-01-20T00:33:05.234775932Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.823338909s" Jan 20 00:33:05.234853 containerd[1468]: time="2026-01-20T00:33:05.234825884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 00:33:05.236218 containerd[1468]: time="2026-01-20T00:33:05.236115669Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:33:05.947829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410974818.mount: Deactivated successfully. 
Jan 20 00:33:05.999852 containerd[1468]: time="2026-01-20T00:33:05.999665716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:06.001211 containerd[1468]: time="2026-01-20T00:33:06.001103618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:33:06.003430 containerd[1468]: time="2026-01-20T00:33:06.003374132Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:06.006871 containerd[1468]: time="2026-01-20T00:33:06.006800998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:06.008747 containerd[1468]: time="2026-01-20T00:33:06.008459450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 772.191097ms" Jan 20 00:33:06.008747 containerd[1468]: time="2026-01-20T00:33:06.008708770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:33:06.010070 containerd[1468]: time="2026-01-20T00:33:06.010013718Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 00:33:06.696221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402463446.mount: Deactivated successfully. Jan 20 00:33:07.752986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 00:33:07.774880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:08.633959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:08.669172 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:33:08.785416 kubelet[2046]: E0120 00:33:08.785255 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:33:08.789723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:33:08.789941 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
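Every kubelet start so far has failed the same way: /var/lib/kubelet/config.yaml does not exist, kubelet exits with status 1, and systemd schedules another restart (the counter climbs to 1, 2, 3). On a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so these failures are expected until that step runs. Purely to illustrate what is missing, a hypothetical minimal stand-in could be written like this (kubeadm would generate a far more complete file; this is a sketch, not the real bootstrap path):

    from pathlib import Path

    # Hypothetical minimal KubeletConfiguration, shown only to illustrate the path
    # kubelet is looking for; kubeadm normally generates this file with many more fields.
    minimal_config = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )
    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(minimal_config)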
Jan 20 00:33:12.511735 containerd[1468]: time="2026-01-20T00:33:12.511608450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:12.514040 containerd[1468]: time="2026-01-20T00:33:12.513837847Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 00:33:12.515755 containerd[1468]: time="2026-01-20T00:33:12.515722129Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:12.521546 containerd[1468]: time="2026-01-20T00:33:12.521288054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:12.523161 containerd[1468]: time="2026-01-20T00:33:12.523078785Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.513027718s" Jan 20 00:33:12.523161 containerd[1468]: time="2026-01-20T00:33:12.523137996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 00:33:15.340945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:15.351987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:15.382732 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-9.scope)... Jan 20 00:33:15.382779 systemd[1]: Reloading... Jan 20 00:33:15.475596 zram_generator::config[2130]: No configuration found. Jan 20 00:33:15.597087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:33:15.679346 systemd[1]: Reloading finished in 296 ms. Jan 20 00:33:15.750951 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:33:15.751129 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:33:15.751845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:15.755909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:16.016909 update_engine[1457]: I20260120 00:33:16.016786 1457 update_attempter.cc:509] Updating boot flags... Jan 20 00:33:16.047688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:16.075184 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:33:16.151593 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2194) Jan 20 00:33:16.206555 kubelet[2182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:33:16.206555 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:33:16.206555 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:16.206555 kubelet[2182]: I0120 00:33:16.206184 2182 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:33:16.542607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2195) Jan 20 00:33:17.465799 kubelet[2182]: I0120 00:33:17.465700 2182 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:33:17.465799 kubelet[2182]: I0120 00:33:17.465771 2182 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:33:17.466649 kubelet[2182]: I0120 00:33:17.466204 2182 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:33:17.502294 kubelet[2182]: E0120 00:33:17.502152 2182 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:17.504249 kubelet[2182]: I0120 00:33:17.504167 2182 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:33:17.514812 kubelet[2182]: E0120 00:33:17.514744 2182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:33:17.514812 kubelet[2182]: I0120 00:33:17.514798 2182 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:33:17.520938 kubelet[2182]: I0120 00:33:17.520854 2182 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:33:17.588318 kubelet[2182]: I0120 00:33:17.587858 2182 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:33:17.589138 kubelet[2182]: I0120 00:33:17.588238 2182 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:33:17.589708 kubelet[2182]: I0120 00:33:17.589236 2182 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:33:17.589708 kubelet[2182]: I0120 00:33:17.589298 2182 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:33:17.590284 kubelet[2182]: I0120 00:33:17.590020 2182 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:17.598836 kubelet[2182]: I0120 00:33:17.598607 2182 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:33:17.599109 kubelet[2182]: I0120 00:33:17.598983 2182 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:33:17.599287 kubelet[2182]: I0120 00:33:17.599145 2182 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:33:17.599287 kubelet[2182]: I0120 00:33:17.599228 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:33:17.603811 kubelet[2182]: W0120 00:33:17.603694 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:17.603811 kubelet[2182]: E0120 00:33:17.603794 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:17.606037 kubelet[2182]: W0120 00:33:17.605883 2182 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:17.606134 kubelet[2182]: E0120 00:33:17.606056 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:17.608277 kubelet[2182]: I0120 00:33:17.608191 2182 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:33:17.608884 kubelet[2182]: I0120 00:33:17.608836 2182 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:33:17.610345 kubelet[2182]: W0120 00:33:17.610284 2182 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:33:17.614982 kubelet[2182]: I0120 00:33:17.614858 2182 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:33:17.615358 kubelet[2182]: I0120 00:33:17.615230 2182 server.go:1287] "Started kubelet" Jan 20 00:33:17.616797 kubelet[2182]: I0120 00:33:17.616163 2182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:33:17.616797 kubelet[2182]: I0120 00:33:17.616577 2182 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:33:17.619243 kubelet[2182]: I0120 00:33:17.617100 2182 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:33:17.619243 kubelet[2182]: I0120 00:33:17.617677 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:33:17.619243 kubelet[2182]: I0120 00:33:17.618567 2182 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:33:17.620673 kubelet[2182]: I0120 00:33:17.620293 2182 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:33:17.640325 kubelet[2182]: I0120 00:33:17.640198 2182 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:33:17.720926 kubelet[2182]: I0120 00:33:17.720641 2182 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:33:17.720926 kubelet[2182]: E0120 00:33:17.720663 2182 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:17.721218 kubelet[2182]: E0120 00:33:17.625774 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4929d1fac8d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:33:17.615122642 +0000 UTC m=+1.517510963,LastTimestamp:2026-01-20 00:33:17.615122642 +0000 UTC m=+1.517510963,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:33:17.721373 kubelet[2182]: E0120 00:33:17.721250 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Jan 20 00:33:17.722043 kubelet[2182]: W0120 00:33:17.721852 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:17.722043 kubelet[2182]: E0120 00:33:17.722004 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:17.722821 kubelet[2182]: I0120 00:33:17.722646 2182 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:33:17.726892 kubelet[2182]: I0120 00:33:17.726776 2182 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:33:17.726971 kubelet[2182]: I0120 00:33:17.726899 2182 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:33:17.730551 kubelet[2182]: E0120 00:33:17.728357 2182 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:33:17.730551 kubelet[2182]: I0120 00:33:17.728600 2182 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:33:17.748751 kubelet[2182]: I0120 00:33:17.748627 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:33:17.750776 kubelet[2182]: I0120 00:33:17.750710 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:33:17.750776 kubelet[2182]: I0120 00:33:17.750762 2182 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:33:17.750776 kubelet[2182]: I0120 00:33:17.750784 2182 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:33:17.750960 kubelet[2182]: I0120 00:33:17.750791 2182 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:33:17.750960 kubelet[2182]: E0120 00:33:17.750871 2182 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:33:17.752101 kubelet[2182]: W0120 00:33:17.751895 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:17.752101 kubelet[2182]: E0120 00:33:17.752042 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:17.774706 kubelet[2182]: I0120 00:33:17.774665 2182 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:33:17.774706 kubelet[2182]: I0120 00:33:17.774697 2182 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:33:17.774706 kubelet[2182]: I0120 00:33:17.774713 2182 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:17.777916 kubelet[2182]: I0120 00:33:17.777832 2182 policy_none.go:49] "None policy: Start" Jan 20 00:33:17.777916 kubelet[2182]: I0120 00:33:17.777891 2182 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:33:17.777916 kubelet[2182]: I0120 00:33:17.777916 2182 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:33:17.787113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:33:17.816307 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 00:33:17.820997 kubelet[2182]: E0120 00:33:17.820934 2182 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:33:17.821468 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 00:33:17.835341 kubelet[2182]: I0120 00:33:17.835282 2182 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:33:17.836045 kubelet[2182]: I0120 00:33:17.835784 2182 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:33:17.836045 kubelet[2182]: I0120 00:33:17.835807 2182 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:33:17.836161 kubelet[2182]: I0120 00:33:17.836139 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:33:17.837268 kubelet[2182]: E0120 00:33:17.837222 2182 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:33:17.837383 kubelet[2182]: E0120 00:33:17.837271 2182 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:33:17.867105 systemd[1]: Created slice kubepods-burstable-pod9e62de29e52e9d9fbf38a43951c67920.slice - libcontainer container kubepods-burstable-pod9e62de29e52e9d9fbf38a43951c67920.slice. 
Jan 20 00:33:17.877747 kubelet[2182]: E0120 00:33:17.877653 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:17.879951 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 00:33:17.891380 kubelet[2182]: E0120 00:33:17.891313 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:17.894959 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 00:33:17.897344 kubelet[2182]: E0120 00:33:17.897290 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:17.922361 kubelet[2182]: E0120 00:33:17.922198 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Jan 20 00:33:17.923715 kubelet[2182]: I0120 00:33:17.923626 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:17.923715 kubelet[2182]: I0120 00:33:17.923684 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:17.923715 kubelet[2182]: I0120 00:33:17.923702 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:17.923715 kubelet[2182]: I0120 00:33:17.923717 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:17.923839 kubelet[2182]: I0120 00:33:17.923730 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:17.923839 kubelet[2182]: I0120 00:33:17.923743 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:17.923839 kubelet[2182]: I0120 00:33:17.923757 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:17.923839 kubelet[2182]: I0120 00:33:17.923770 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:17.923839 kubelet[2182]: I0120 00:33:17.923783 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:17.939253 kubelet[2182]: I0120 00:33:17.939081 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:17.939677 kubelet[2182]: E0120 00:33:17.939622 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 20 00:33:18.142299 kubelet[2182]: I0120 00:33:18.142145 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:18.143040 kubelet[2182]: E0120 00:33:18.142871 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 20 00:33:18.179459 kubelet[2182]: E0120 00:33:18.179289 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:18.180597 containerd[1468]: time="2026-01-20T00:33:18.180466309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e62de29e52e9d9fbf38a43951c67920,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:18.191992 kubelet[2182]: E0120 00:33:18.191942 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:18.192706 containerd[1468]: time="2026-01-20T00:33:18.192640079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:18.198386 kubelet[2182]: E0120 00:33:18.198333 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:18.199169 containerd[1468]: time="2026-01-20T00:33:18.199100864Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:18.323622 kubelet[2182]: E0120 00:33:18.323352 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Jan 20 00:33:18.546445 kubelet[2182]: I0120 00:33:18.546246 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:18.547191 kubelet[2182]: E0120 00:33:18.546814 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 20 00:33:18.562349 kubelet[2182]: W0120 00:33:18.562115 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:18.562601 kubelet[2182]: E0120 00:33:18.562379 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:18.628399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719082671.mount: Deactivated successfully. Jan 20 00:33:18.639685 containerd[1468]: time="2026-01-20T00:33:18.639400325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:18.644094 containerd[1468]: time="2026-01-20T00:33:18.643854084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:33:18.645683 containerd[1468]: time="2026-01-20T00:33:18.645352115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:18.647730 containerd[1468]: time="2026-01-20T00:33:18.647668278Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:18.649627 containerd[1468]: time="2026-01-20T00:33:18.649344862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:33:18.651210 containerd[1468]: time="2026-01-20T00:33:18.651101376Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:18.652764 containerd[1468]: time="2026-01-20T00:33:18.652696183Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:33:18.656112 containerd[1468]: time="2026-01-20T00:33:18.656023084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:33:18.669029 containerd[1468]: time="2026-01-20T00:33:18.668839528Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.109069ms" Jan 20 00:33:18.672941 containerd[1468]: time="2026-01-20T00:33:18.672869848Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.130314ms" Jan 20 00:33:18.674749 containerd[1468]: time="2026-01-20T00:33:18.674607787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.385155ms" Jan 20 00:33:18.932846 kernel: hrtimer: interrupt took 6070723 ns Jan 20 00:33:19.040788 kubelet[2182]: W0120 00:33:19.040552 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:19.041946 kubelet[2182]: E0120 00:33:19.041692 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:19.152016 kubelet[2182]: E0120 00:33:19.151644 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Jan 20 00:33:19.152016 kubelet[2182]: W0120 00:33:19.151641 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:19.152016 kubelet[2182]: E0120 00:33:19.151780 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:19.197038 kubelet[2182]: W0120 00:33:19.196947 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 20 00:33:19.197038 kubelet[2182]: E0120 00:33:19.197029 2182 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:19.370455 kubelet[2182]: I0120 00:33:19.370078 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:19.372827 kubelet[2182]: E0120 00:33:19.371040 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 20 00:33:19.385201 containerd[1468]: time="2026-01-20T00:33:19.384777820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:19.385201 containerd[1468]: time="2026-01-20T00:33:19.384859673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:19.385201 containerd[1468]: time="2026-01-20T00:33:19.384873457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:19.385201 containerd[1468]: time="2026-01-20T00:33:19.384947726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:19.392993 containerd[1468]: time="2026-01-20T00:33:19.392594513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:19.392993 containerd[1468]: time="2026-01-20T00:33:19.392646589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:19.392993 containerd[1468]: time="2026-01-20T00:33:19.392657760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:19.392993 containerd[1468]: time="2026-01-20T00:33:19.392769266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:19.394706 containerd[1468]: time="2026-01-20T00:33:19.394225093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:19.394706 containerd[1468]: time="2026-01-20T00:33:19.394260931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:19.394706 containerd[1468]: time="2026-01-20T00:33:19.394270688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:19.394706 containerd[1468]: time="2026-01-20T00:33:19.394330689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:19.443195 systemd[1]: Started cri-containerd-0e1f57ec68776d653df090a62bf00d05927e8c15e189eb726c86e8b51a66b01c.scope - libcontainer container 0e1f57ec68776d653df090a62bf00d05927e8c15e189eb726c86e8b51a66b01c. 
Jan 20 00:33:19.455843 systemd[1]: Started cri-containerd-074bc895e70caff5976bb38a9d04876bb493e8961317d51cb2a2f0d19d74759b.scope - libcontainer container 074bc895e70caff5976bb38a9d04876bb493e8961317d51cb2a2f0d19d74759b. Jan 20 00:33:19.464781 systemd[1]: Started cri-containerd-6d9ead44521d91500fa236a9307be2d23a55364b1e7004f973b1f3db1e1e6320.scope - libcontainer container 6d9ead44521d91500fa236a9307be2d23a55364b1e7004f973b1f3db1e1e6320. Jan 20 00:33:19.570439 kubelet[2182]: E0120 00:33:19.568809 2182 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:33:19.583248 containerd[1468]: time="2026-01-20T00:33:19.583009012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e1f57ec68776d653df090a62bf00d05927e8c15e189eb726c86e8b51a66b01c\"" Jan 20 00:33:19.586278 kubelet[2182]: E0120 00:33:19.584900 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:19.590314 containerd[1468]: time="2026-01-20T00:33:19.590286988Z" level=info msg="CreateContainer within sandbox \"0e1f57ec68776d653df090a62bf00d05927e8c15e189eb726c86e8b51a66b01c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:33:19.590768 containerd[1468]: time="2026-01-20T00:33:19.590704078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"074bc895e70caff5976bb38a9d04876bb493e8961317d51cb2a2f0d19d74759b\"" Jan 20 00:33:19.592066 kubelet[2182]: E0120 00:33:19.591931 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:19.594836 containerd[1468]: time="2026-01-20T00:33:19.594800608Z" level=info msg="CreateContainer within sandbox \"074bc895e70caff5976bb38a9d04876bb493e8961317d51cb2a2f0d19d74759b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:33:19.614256 containerd[1468]: time="2026-01-20T00:33:19.614201278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e62de29e52e9d9fbf38a43951c67920,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d9ead44521d91500fa236a9307be2d23a55364b1e7004f973b1f3db1e1e6320\"" Jan 20 00:33:19.615706 kubelet[2182]: E0120 00:33:19.615648 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:19.619585 containerd[1468]: time="2026-01-20T00:33:19.619366911Z" level=info msg="CreateContainer within sandbox \"6d9ead44521d91500fa236a9307be2d23a55364b1e7004f973b1f3db1e1e6320\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:33:19.626134 containerd[1468]: time="2026-01-20T00:33:19.625807282Z" level=info msg="CreateContainer within sandbox \"0e1f57ec68776d653df090a62bf00d05927e8c15e189eb726c86e8b51a66b01c\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6783fa5d678faec7c8df206ebf5bf65904bf33d1bccc4819460023a39bb5a709\"" Jan 20 00:33:19.628086 containerd[1468]: time="2026-01-20T00:33:19.627876500Z" level=info msg="StartContainer for \"6783fa5d678faec7c8df206ebf5bf65904bf33d1bccc4819460023a39bb5a709\"" Jan 20 00:33:19.638173 containerd[1468]: time="2026-01-20T00:33:19.638128528Z" level=info msg="CreateContainer within sandbox \"074bc895e70caff5976bb38a9d04876bb493e8961317d51cb2a2f0d19d74759b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d5073fcf722eea8b4ad4584d0285700ccacba8e4cf7046cb0ad3314e0e2103a9\"" Jan 20 00:33:19.639240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067233937.mount: Deactivated successfully. Jan 20 00:33:19.639655 containerd[1468]: time="2026-01-20T00:33:19.639269211Z" level=info msg="StartContainer for \"d5073fcf722eea8b4ad4584d0285700ccacba8e4cf7046cb0ad3314e0e2103a9\"" Jan 20 00:33:19.658808 containerd[1468]: time="2026-01-20T00:33:19.658677420Z" level=info msg="CreateContainer within sandbox \"6d9ead44521d91500fa236a9307be2d23a55364b1e7004f973b1f3db1e1e6320\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e93cfa3e4a296180cd588d3e062eb184313b35876115cd6f6920ffe341b07bc\"" Jan 20 00:33:19.665549 containerd[1468]: time="2026-01-20T00:33:19.664748691Z" level=info msg="StartContainer for \"1e93cfa3e4a296180cd588d3e062eb184313b35876115cd6f6920ffe341b07bc\"" Jan 20 00:33:19.673725 systemd[1]: Started cri-containerd-6783fa5d678faec7c8df206ebf5bf65904bf33d1bccc4819460023a39bb5a709.scope - libcontainer container 6783fa5d678faec7c8df206ebf5bf65904bf33d1bccc4819460023a39bb5a709. Jan 20 00:33:19.698768 systemd[1]: Started cri-containerd-d5073fcf722eea8b4ad4584d0285700ccacba8e4cf7046cb0ad3314e0e2103a9.scope - libcontainer container d5073fcf722eea8b4ad4584d0285700ccacba8e4cf7046cb0ad3314e0e2103a9. Jan 20 00:33:19.721737 systemd[1]: Started cri-containerd-1e93cfa3e4a296180cd588d3e062eb184313b35876115cd6f6920ffe341b07bc.scope - libcontainer container 1e93cfa3e4a296180cd588d3e062eb184313b35876115cd6f6920ffe341b07bc. 
Jan 20 00:33:19.809922 containerd[1468]: time="2026-01-20T00:33:19.809690953Z" level=info msg="StartContainer for \"6783fa5d678faec7c8df206ebf5bf65904bf33d1bccc4819460023a39bb5a709\" returns successfully" Jan 20 00:33:19.822832 containerd[1468]: time="2026-01-20T00:33:19.822709586Z" level=info msg="StartContainer for \"d5073fcf722eea8b4ad4584d0285700ccacba8e4cf7046cb0ad3314e0e2103a9\" returns successfully" Jan 20 00:33:19.853762 containerd[1468]: time="2026-01-20T00:33:19.853725327Z" level=info msg="StartContainer for \"1e93cfa3e4a296180cd588d3e062eb184313b35876115cd6f6920ffe341b07bc\" returns successfully" Jan 20 00:33:20.831700 kubelet[2182]: E0120 00:33:20.831668 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:20.835056 kubelet[2182]: E0120 00:33:20.832692 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.835056 kubelet[2182]: E0120 00:33:20.834368 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:20.835056 kubelet[2182]: E0120 00:33:20.834697 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.835844 kubelet[2182]: E0120 00:33:20.835764 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:20.836595 kubelet[2182]: E0120 00:33:20.836144 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:20.995092 kubelet[2182]: I0120 00:33:20.995007 2182 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:22.403567 kubelet[2182]: E0120 00:33:22.402806 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:22.404621 kubelet[2182]: E0120 00:33:22.404063 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:22.412449 kubelet[2182]: E0120 00:33:22.411804 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:22.412449 kubelet[2182]: E0120 00:33:22.412030 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:22.423981 kubelet[2182]: E0120 00:33:22.423854 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:22.426549 kubelet[2182]: E0120 00:33:22.425567 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:23.050034 kubelet[2182]: E0120 00:33:23.049950 2182 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:23.050350 kubelet[2182]: E0120 00:33:23.050284 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:23.060121 kubelet[2182]: E0120 00:33:23.055610 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:33:23.060121 kubelet[2182]: E0120 00:33:23.059737 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:23.648765 kubelet[2182]: E0120 00:33:23.648617 2182 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:33:23.787155 kubelet[2182]: E0120 00:33:23.786755 2182 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4929d1fac8d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:33:17.615122642 +0000 UTC m=+1.517510963,LastTimestamp:2026-01-20 00:33:17.615122642 +0000 UTC m=+1.517510963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:33:23.798580 kubelet[2182]: I0120 00:33:23.793128 2182 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:33:23.822684 kubelet[2182]: I0120 00:33:23.821775 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:23.828469 kubelet[2182]: I0120 00:33:23.827296 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:23.878557 kubelet[2182]: E0120 00:33:23.873097 2182 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4929d8ba0469 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:33:17.728318569 +0000 UTC m=+1.630706891,LastTimestamp:2026-01-20 00:33:17.728318569 +0000 UTC m=+1.630706891,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:33:23.886600 kubelet[2182]: E0120 00:33:23.882122 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:23.886600 kubelet[2182]: E0120 00:33:23.882383 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:23.886600 kubelet[2182]: E0120 00:33:23.885753 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:23.886600 kubelet[2182]: I0120 00:33:23.885888 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:23.894739 kubelet[2182]: E0120 00:33:23.894664 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:23.895018 kubelet[2182]: I0120 00:33:23.894738 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:23.906012 kubelet[2182]: E0120 00:33:23.905766 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:24.612319 kubelet[2182]: I0120 00:33:24.611820 2182 apiserver.go:52] "Watching apiserver" Jan 20 00:33:24.728138 kubelet[2182]: I0120 00:33:24.727889 2182 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:33:26.075926 kubelet[2182]: I0120 00:33:26.075860 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:26.103279 kubelet[2182]: E0120 00:33:26.103150 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:27.089605 kubelet[2182]: E0120 00:33:27.088006 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:27.992809 kubelet[2182]: I0120 00:33:27.992473 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.992356872 podStartE2EDuration="1.992356872s" podCreationTimestamp="2026-01-20 00:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:27.98708875 +0000 UTC m=+11.889477102" watchObservedRunningTime="2026-01-20 00:33:27.992356872 +0000 UTC m=+11.894745233" Jan 20 00:33:28.616878 kubelet[2182]: I0120 00:33:28.615940 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:28.643864 kubelet[2182]: E0120 00:33:28.643823 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:29.117631 kubelet[2182]: E0120 00:33:29.116162 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:29.715858 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-9.scope)... Jan 20 00:33:29.715910 systemd[1]: Reloading... 
Jan 20 00:33:30.038669 zram_generator::config[2515]: No configuration found. Jan 20 00:33:30.278259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:33:30.549727 systemd[1]: Reloading finished in 832 ms. Jan 20 00:33:30.697642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:30.727225 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:33:30.727808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:30.727894 systemd[1]: kubelet.service: Consumed 4.972s CPU time, 133.8M memory peak, 0B memory swap peak. Jan 20 00:33:30.741012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:33:31.131735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:33:31.157128 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:33:31.327842 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:31.327842 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:33:31.327842 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:33:31.330896 kubelet[2558]: I0120 00:33:31.330848 2558 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:33:31.371120 kubelet[2558]: I0120 00:33:31.370774 2558 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:33:31.371120 kubelet[2558]: I0120 00:33:31.370826 2558 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:33:31.372096 kubelet[2558]: I0120 00:33:31.371774 2558 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:33:31.381256 kubelet[2558]: I0120 00:33:31.376841 2558 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 00:33:31.383456 kubelet[2558]: I0120 00:33:31.382831 2558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:33:31.411465 kubelet[2558]: E0120 00:33:31.408012 2558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:33:31.411465 kubelet[2558]: I0120 00:33:31.408054 2558 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:33:31.426798 kubelet[2558]: I0120 00:33:31.426745 2558 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:33:31.429520 kubelet[2558]: I0120 00:33:31.427281 2558 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:33:31.429898 kubelet[2558]: I0120 00:33:31.429452 2558 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:33:31.429898 kubelet[2558]: I0120 00:33:31.429894 2558 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:33:31.430086 kubelet[2558]: I0120 00:33:31.429913 2558 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:33:31.430086 kubelet[2558]: I0120 00:33:31.429985 2558 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:31.432212 kubelet[2558]: I0120 00:33:31.431982 2558 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:33:31.432290 kubelet[2558]: I0120 00:33:31.432241 2558 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:33:31.434802 kubelet[2558]: I0120 00:33:31.434676 2558 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:33:31.437193 kubelet[2558]: I0120 00:33:31.434706 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:33:31.440848 kubelet[2558]: I0120 00:33:31.440742 2558 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:33:31.442796 kubelet[2558]: I0120 00:33:31.442369 2558 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:33:31.445807 kubelet[2558]: I0120 00:33:31.445760 2558 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:33:31.445934 kubelet[2558]: I0120 00:33:31.445836 2558 server.go:1287] "Started kubelet" Jan 20 00:33:31.447730 kubelet[2558]: I0120 00:33:31.446755 2558 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:33:31.450692 kubelet[2558]: I0120 00:33:31.450467 2558 server.go:479] "Adding 
debug handlers to kubelet server" Jan 20 00:33:31.455630 kubelet[2558]: I0120 00:33:31.455318 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:33:31.463572 kubelet[2558]: I0120 00:33:31.458842 2558 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:33:31.471873 kubelet[2558]: E0120 00:33:31.471843 2558 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:33:31.472786 kubelet[2558]: I0120 00:33:31.471922 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:33:31.473568 kubelet[2558]: I0120 00:33:31.472104 2558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:33:31.473759 kubelet[2558]: I0120 00:33:31.473745 2558 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:33:31.477187 kubelet[2558]: I0120 00:33:31.477110 2558 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:33:31.477705 kubelet[2558]: I0120 00:33:31.477657 2558 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:33:31.478089 kubelet[2558]: I0120 00:33:31.478022 2558 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:33:31.480869 kubelet[2558]: I0120 00:33:31.480838 2558 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:33:31.485238 kubelet[2558]: I0120 00:33:31.485134 2558 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:33:31.553283 kubelet[2558]: I0120 00:33:31.553195 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:33:31.560921 kubelet[2558]: I0120 00:33:31.560841 2558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:33:31.561469 kubelet[2558]: I0120 00:33:31.561129 2558 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:33:31.561469 kubelet[2558]: I0120 00:33:31.561253 2558 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:33:31.561469 kubelet[2558]: I0120 00:33:31.561269 2558 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:33:31.563002 kubelet[2558]: E0120 00:33:31.561721 2558 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:33:31.622638 kubelet[2558]: I0120 00:33:31.622568 2558 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:33:31.622638 kubelet[2558]: I0120 00:33:31.622596 2558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:33:31.622638 kubelet[2558]: I0120 00:33:31.622632 2558 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:33:31.624239 kubelet[2558]: I0120 00:33:31.623116 2558 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:33:31.624239 kubelet[2558]: I0120 00:33:31.623147 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:33:31.624239 kubelet[2558]: I0120 00:33:31.623182 2558 policy_none.go:49] "None policy: Start" Jan 20 00:33:31.624239 kubelet[2558]: I0120 00:33:31.623198 2558 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:33:31.624239 kubelet[2558]: I0120 00:33:31.623220 2558 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:33:31.624239 kubelet[2558]: I0120 00:33:31.623458 2558 state_mem.go:75] "Updated machine memory state" Jan 20 00:33:31.646852 kubelet[2558]: I0120 00:33:31.645220 2558 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:33:31.646852 kubelet[2558]: I0120 00:33:31.645638 2558 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:33:31.646852 kubelet[2558]: I0120 00:33:31.645654 2558 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:33:31.649596 kubelet[2558]: I0120 00:33:31.647847 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:33:31.658761 kubelet[2558]: E0120 00:33:31.653692 2558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:33:31.668920 kubelet[2558]: I0120 00:33:31.668891 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:31.670471 kubelet[2558]: I0120 00:33:31.670242 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:31.672003 kubelet[2558]: I0120 00:33:31.671642 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:31.678151 kubelet[2558]: I0120 00:33:31.678119 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:31.683024 kubelet[2558]: I0120 00:33:31.682575 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:31.683024 kubelet[2558]: I0120 00:33:31.682654 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:31.683024 kubelet[2558]: I0120 00:33:31.682691 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:31.683024 kubelet[2558]: I0120 00:33:31.682721 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:31.683024 kubelet[2558]: I0120 00:33:31.682745 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:31.683373 kubelet[2558]: I0120 00:33:31.682775 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:31.683373 kubelet[2558]: I0120 00:33:31.682797 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:31.683373 kubelet[2558]: I0120 00:33:31.682867 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:31.708239 kubelet[2558]: E0120 00:33:31.707821 2558 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:33:31.714448 kubelet[2558]: E0120 00:33:31.714207 2558 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:31.784571 kubelet[2558]: I0120 00:33:31.783861 2558 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:33:31.832608 kubelet[2558]: I0120 00:33:31.831601 2558 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:33:31.832608 kubelet[2558]: I0120 00:33:31.831759 2558 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:33:32.020216 kubelet[2558]: E0120 00:33:32.013567 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:32.020216 kubelet[2558]: E0120 00:33:32.019689 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:32.020216 kubelet[2558]: E0120 00:33:32.019966 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:32.442154 kubelet[2558]: I0120 00:33:32.438932 2558 apiserver.go:52] "Watching apiserver" Jan 20 00:33:32.478084 kubelet[2558]: I0120 00:33:32.477987 2558 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:33:32.616804 kubelet[2558]: I0120 00:33:32.616202 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:32.623034 kubelet[2558]: I0120 00:33:32.616382 2558 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:32.623034 kubelet[2558]: E0120 00:33:32.617007 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:32.724965 kubelet[2558]: E0120 00:33:32.724291 2558 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:33:32.746599 kubelet[2558]: E0120 00:33:32.743224 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:32.760260 kubelet[2558]: E0120 00:33:32.760119 2558 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:33:32.771591 kubelet[2558]: E0120 00:33:32.765716 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:32.843732 kubelet[2558]: I0120 00:33:32.836030 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.835966021 podStartE2EDuration="4.835966021s" podCreationTimestamp="2026-01-20 00:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:32.773975005 +0000 UTC m=+1.589356859" watchObservedRunningTime="2026-01-20 00:33:32.835966021 +0000 UTC m=+1.651347885" Jan 20 00:33:32.852821 kubelet[2558]: I0120 00:33:32.851298 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.848256792 podStartE2EDuration="1.848256792s" podCreationTimestamp="2026-01-20 00:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:32.835720698 +0000 UTC m=+1.651102543" watchObservedRunningTime="2026-01-20 00:33:32.848256792 +0000 UTC m=+1.663638656" Jan 20 00:33:33.618281 kubelet[2558]: E0120 00:33:33.618032 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:33.618281 kubelet[2558]: E0120 00:33:33.618176 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:33.631561 kubelet[2558]: E0120 00:33:33.625932 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:34.034278 kubelet[2558]: I0120 00:33:34.032380 2558 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:33:34.039308 containerd[1468]: time="2026-01-20T00:33:34.039129132Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:33:34.040091 kubelet[2558]: I0120 00:33:34.039599 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:33:34.499880 systemd[1]: Created slice kubepods-besteffort-pod5be478e4_41a1_4194_a4eb_731b1b67541b.slice - libcontainer container kubepods-besteffort-pod5be478e4_41a1_4194_a4eb_731b1b67541b.slice. 
Jan 20 00:33:34.527871 kubelet[2558]: I0120 00:33:34.527821 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5be478e4-41a1-4194-a4eb-731b1b67541b-kube-proxy\") pod \"kube-proxy-tmbj8\" (UID: \"5be478e4-41a1-4194-a4eb-731b1b67541b\") " pod="kube-system/kube-proxy-tmbj8" Jan 20 00:33:34.527871 kubelet[2558]: I0120 00:33:34.527872 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5be478e4-41a1-4194-a4eb-731b1b67541b-lib-modules\") pod \"kube-proxy-tmbj8\" (UID: \"5be478e4-41a1-4194-a4eb-731b1b67541b\") " pod="kube-system/kube-proxy-tmbj8" Jan 20 00:33:34.528303 kubelet[2558]: I0120 00:33:34.527911 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5be478e4-41a1-4194-a4eb-731b1b67541b-xtables-lock\") pod \"kube-proxy-tmbj8\" (UID: \"5be478e4-41a1-4194-a4eb-731b1b67541b\") " pod="kube-system/kube-proxy-tmbj8" Jan 20 00:33:34.528303 kubelet[2558]: I0120 00:33:34.527938 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs2h\" (UniqueName: \"kubernetes.io/projected/5be478e4-41a1-4194-a4eb-731b1b67541b-kube-api-access-pjs2h\") pod \"kube-proxy-tmbj8\" (UID: \"5be478e4-41a1-4194-a4eb-731b1b67541b\") " pod="kube-system/kube-proxy-tmbj8" Jan 20 00:33:34.624261 kubelet[2558]: E0120 00:33:34.624228 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:34.834745 kubelet[2558]: E0120 00:33:34.832842 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:34.837264 containerd[1468]: time="2026-01-20T00:33:34.837065097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmbj8,Uid:5be478e4-41a1-4194-a4eb-731b1b67541b,Namespace:kube-system,Attempt:0,}" Jan 20 00:33:35.039233 kubelet[2558]: I0120 00:33:35.038083 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49b97792-ef28-402a-88e4-a6f18fc7d36f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vmtn2\" (UID: \"49b97792-ef28-402a-88e4-a6f18fc7d36f\") " pod="tigera-operator/tigera-operator-7dcd859c48-vmtn2" Jan 20 00:33:35.039233 kubelet[2558]: I0120 00:33:35.038212 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd8jc\" (UniqueName: \"kubernetes.io/projected/49b97792-ef28-402a-88e4-a6f18fc7d36f-kube-api-access-jd8jc\") pod \"tigera-operator-7dcd859c48-vmtn2\" (UID: \"49b97792-ef28-402a-88e4-a6f18fc7d36f\") " pod="tigera-operator/tigera-operator-7dcd859c48-vmtn2" Jan 20 00:33:35.050553 systemd[1]: Created slice kubepods-besteffort-pod49b97792_ef28_402a_88e4_a6f18fc7d36f.slice - libcontainer container kubepods-besteffort-pod49b97792_ef28_402a_88e4_a6f18fc7d36f.slice. Jan 20 00:33:35.167135 containerd[1468]: time="2026-01-20T00:33:35.166072582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:35.167135 containerd[1468]: time="2026-01-20T00:33:35.166306457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:35.167135 containerd[1468]: time="2026-01-20T00:33:35.166324520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:35.168844 containerd[1468]: time="2026-01-20T00:33:35.168688739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:35.228782 systemd[1]: Started cri-containerd-5ae6cfe92c144bfb672261b91ec2620872a9f5c60f00fd6de6c15b9077d8b96d.scope - libcontainer container 5ae6cfe92c144bfb672261b91ec2620872a9f5c60f00fd6de6c15b9077d8b96d. Jan 20 00:33:35.321950 containerd[1468]: time="2026-01-20T00:33:35.321769592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmbj8,Uid:5be478e4-41a1-4194-a4eb-731b1b67541b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae6cfe92c144bfb672261b91ec2620872a9f5c60f00fd6de6c15b9077d8b96d\"" Jan 20 00:33:35.323384 kubelet[2558]: E0120 00:33:35.323221 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:35.335828 kubelet[2558]: E0120 00:33:35.332468 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:35.368608 containerd[1468]: time="2026-01-20T00:33:35.365285374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vmtn2,Uid:49b97792-ef28-402a-88e4-a6f18fc7d36f,Namespace:tigera-operator,Attempt:0,}" Jan 20 00:33:35.369334 containerd[1468]: time="2026-01-20T00:33:35.369299758Z" level=info msg="CreateContainer within sandbox \"5ae6cfe92c144bfb672261b91ec2620872a9f5c60f00fd6de6c15b9077d8b96d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:33:35.514897 containerd[1468]: time="2026-01-20T00:33:35.514746620Z" level=info msg="CreateContainer within sandbox \"5ae6cfe92c144bfb672261b91ec2620872a9f5c60f00fd6de6c15b9077d8b96d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dbfb25b471e45641d8ad7ea082a209d2d7b3a8be1515ce4fa194862b1ade3870\"" Jan 20 00:33:35.519948 containerd[1468]: time="2026-01-20T00:33:35.517711184Z" level=info msg="StartContainer for \"dbfb25b471e45641d8ad7ea082a209d2d7b3a8be1515ce4fa194862b1ade3870\"" Jan 20 00:33:35.568262 containerd[1468]: time="2026-01-20T00:33:35.564603628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:33:35.568262 containerd[1468]: time="2026-01-20T00:33:35.564668790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:33:35.568262 containerd[1468]: time="2026-01-20T00:33:35.564741575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:35.568262 containerd[1468]: time="2026-01-20T00:33:35.564994166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:33:35.666320 kubelet[2558]: E0120 00:33:35.666285 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:35.691752 systemd[1]: Started cri-containerd-dbfb25b471e45641d8ad7ea082a209d2d7b3a8be1515ce4fa194862b1ade3870.scope - libcontainer container dbfb25b471e45641d8ad7ea082a209d2d7b3a8be1515ce4fa194862b1ade3870. Jan 20 00:33:35.755937 systemd[1]: Started cri-containerd-e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed.scope - libcontainer container e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed. Jan 20 00:33:35.814074 containerd[1468]: time="2026-01-20T00:33:35.813209746Z" level=info msg="StartContainer for \"dbfb25b471e45641d8ad7ea082a209d2d7b3a8be1515ce4fa194862b1ade3870\" returns successfully" Jan 20 00:33:35.865973 containerd[1468]: time="2026-01-20T00:33:35.864754484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vmtn2,Uid:49b97792-ef28-402a-88e4-a6f18fc7d36f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed\"" Jan 20 00:33:35.875015 containerd[1468]: time="2026-01-20T00:33:35.874670448Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 00:33:36.439189 kubelet[2558]: E0120 00:33:36.436049 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:36.678670 kubelet[2558]: E0120 00:33:36.677142 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:36.682590 kubelet[2558]: E0120 00:33:36.682391 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:36.946569 kubelet[2558]: E0120 00:33:36.945750 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:37.018182 kubelet[2558]: I0120 00:33:37.018074 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmbj8" podStartSLOduration=3.01805133 podStartE2EDuration="3.01805133s" podCreationTimestamp="2026-01-20 00:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:33:36.724553822 +0000 UTC m=+5.539935706" watchObservedRunningTime="2026-01-20 00:33:37.01805133 +0000 UTC m=+5.833433155" Jan 20 00:33:37.693723 kubelet[2558]: E0120 00:33:37.693314 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:37.701596 kubelet[2558]: E0120 00:33:37.698176 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:37.701596 kubelet[2558]: E0120 00:33:37.698626 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:38.702259 kubelet[2558]: E0120 00:33:38.701856 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:33:38.939331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815337919.mount: Deactivated successfully. Jan 20 00:33:42.167221 containerd[1468]: time="2026-01-20T00:33:42.167087185Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:42.169670 containerd[1468]: time="2026-01-20T00:33:42.169565037Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 00:33:42.172564 containerd[1468]: time="2026-01-20T00:33:42.172329834Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:42.181590 containerd[1468]: time="2026-01-20T00:33:42.181093789Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.306368629s" Jan 20 00:33:42.181590 containerd[1468]: time="2026-01-20T00:33:42.181145316Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 00:33:42.181590 containerd[1468]: time="2026-01-20T00:33:42.181540242Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:33:42.184932 containerd[1468]: time="2026-01-20T00:33:42.184790577Z" level=info msg="CreateContainer within sandbox \"e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 00:33:42.254866 containerd[1468]: time="2026-01-20T00:33:42.254769336Z" level=info msg="CreateContainer within sandbox \"e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585\"" Jan 20 00:33:42.258295 containerd[1468]: time="2026-01-20T00:33:42.255789940Z" level=info msg="StartContainer for \"fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585\"" Jan 20 00:33:42.351780 systemd[1]: Started cri-containerd-fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585.scope - libcontainer container fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585. Jan 20 00:33:42.434291 containerd[1468]: time="2026-01-20T00:33:42.433246175Z" level=info msg="StartContainer for \"fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585\" returns successfully" Jan 20 00:33:48.517571 systemd[1]: cri-containerd-fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585.scope: Deactivated successfully. Jan 20 00:33:48.524299 systemd[1]: cri-containerd-fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585.scope: Consumed 1.160s CPU time. 
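[Editor's note] The dns.go:153 error that repeats throughout this boot is the kubelet noting that the node's resolver configuration lists more nameservers than it can apply; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 in the applied line above) are kept. A rough standalone Go sketch of the same kind of check, assuming the conventional three-nameserver resolver limit; this is an illustration, not the kubelet's own dns.go logic:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Assumed limit: glibc-style resolvers (and the kubelet warning above) apply
	// at most three nameserver entries from resolv.conf.
	const maxNameservers = 3

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, applied line would be: %s\n",
			len(servers), strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("%d nameserver(s) configured: %s\n", len(servers), strings.Join(servers, " "))
	}
}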
Jan 20 00:33:48.631226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585-rootfs.mount: Deactivated successfully. Jan 20 00:33:48.727243 containerd[1468]: time="2026-01-20T00:33:48.720599957Z" level=info msg="shim disconnected" id=fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585 namespace=k8s.io Jan 20 00:33:48.727243 containerd[1468]: time="2026-01-20T00:33:48.727050587Z" level=warning msg="cleaning up after shim disconnected" id=fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585 namespace=k8s.io Jan 20 00:33:48.727243 containerd[1468]: time="2026-01-20T00:33:48.727080152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:33:48.857136 kubelet[2558]: I0120 00:33:48.852209 2558 scope.go:117] "RemoveContainer" containerID="fac83f129013f79867c0f3ee4d1c6d0a7d94ee726c34b0f9fe343263328a4585" Jan 20 00:33:48.864364 containerd[1468]: time="2026-01-20T00:33:48.862113225Z" level=info msg="CreateContainer within sandbox \"e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 20 00:33:48.933599 containerd[1468]: time="2026-01-20T00:33:48.931097984Z" level=info msg="CreateContainer within sandbox \"e00bdccb321d65479da5ab76ad9bf329d0a1c7a6e3adbf9069da4c022e1c36ed\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cc8c3bc0bef1ca954815dd2b33979ad25f816dc89f7cf2b1f1689cecf52000b7\"" Jan 20 00:33:48.933599 containerd[1468]: time="2026-01-20T00:33:48.932115472Z" level=info msg="StartContainer for \"cc8c3bc0bef1ca954815dd2b33979ad25f816dc89f7cf2b1f1689cecf52000b7\"" Jan 20 00:33:49.000951 systemd[1]: Started cri-containerd-cc8c3bc0bef1ca954815dd2b33979ad25f816dc89f7cf2b1f1689cecf52000b7.scope - libcontainer container cc8c3bc0bef1ca954815dd2b33979ad25f816dc89f7cf2b1f1689cecf52000b7. Jan 20 00:33:49.165928 containerd[1468]: time="2026-01-20T00:33:49.165770385Z" level=info msg="StartContainer for \"cc8c3bc0bef1ca954815dd2b33979ad25f816dc89f7cf2b1f1689cecf52000b7\" returns successfully" Jan 20 00:33:49.915644 kubelet[2558]: I0120 00:33:49.913348 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vmtn2" podStartSLOduration=9.602230478 podStartE2EDuration="15.913328438s" podCreationTimestamp="2026-01-20 00:33:34 +0000 UTC" firstStartedPulling="2026-01-20 00:33:35.87204324 +0000 UTC m=+4.687425074" lastFinishedPulling="2026-01-20 00:33:42.1831412 +0000 UTC m=+10.998523034" observedRunningTime="2026-01-20 00:33:42.783755047 +0000 UTC m=+11.599136881" watchObservedRunningTime="2026-01-20 00:33:49.913328438 +0000 UTC m=+18.728710262" Jan 20 00:33:52.910549 sudo[1662]: pam_unix(sudo:session): session closed for user root Jan 20 00:33:52.915921 sshd[1659]: pam_unix(sshd:session): session closed for user core Jan 20 00:33:52.928664 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:43382.service: Deactivated successfully. Jan 20 00:33:52.935717 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:33:52.936188 systemd[1]: session-9.scope: Consumed 7.938s CPU time, 160.2M memory peak, 0B memory swap peak. Jan 20 00:33:52.940249 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:33:52.955562 systemd-logind[1452]: Removed session 9. 
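[Editor's note] In the pod_startup_latency_tracker entry above, tigera-operator's podStartE2EDuration (15.913328438s) and podStartSLOduration (9.602230478s) differ by exactly the image-pull window also reported there (firstStartedPulling to lastFinishedPulling, about 6.311s), consistent with the startup SLO figure excluding image pull time; for the earlier static pods, which pulled nothing, the two figures were equal. A small Go check of that arithmetic using timestamps copied from the entry (illustrative only, not kubelet code):

package main

import (
	"fmt"
	"time"
)

// parse reads Go's default time.Time string form, which the kubelet entries above use.
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator "Observed pod startup duration" entry.
	created := parse("2026-01-20 00:33:34 +0000 UTC")
	pullStart := parse("2026-01-20 00:33:35.87204324 +0000 UTC")
	pullEnd := parse("2026-01-20 00:33:42.1831412 +0000 UTC")
	running := parse("2026-01-20 00:33:49.913328438 +0000 UTC")

	e2e := running.Sub(created)         // 15.913328438s == podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 9.602230478s  == podStartSLOduration
	fmt.Println("e2e:", e2e, "slo:", slo)
}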
Jan 20 00:34:04.851259 systemd[1]: Created slice kubepods-besteffort-pode8f6bbea_68e5_4eda_ad11_84e37ef92945.slice - libcontainer container kubepods-besteffort-pode8f6bbea_68e5_4eda_ad11_84e37ef92945.slice. Jan 20 00:34:04.990870 kubelet[2558]: I0120 00:34:04.990712 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f6bbea-68e5-4eda-ad11-84e37ef92945-tigera-ca-bundle\") pod \"calico-typha-55fb9c7cbb-k6nc4\" (UID: \"e8f6bbea-68e5-4eda-ad11-84e37ef92945\") " pod="calico-system/calico-typha-55fb9c7cbb-k6nc4" Jan 20 00:34:04.990870 kubelet[2558]: I0120 00:34:04.990787 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wdf7\" (UniqueName: \"kubernetes.io/projected/e8f6bbea-68e5-4eda-ad11-84e37ef92945-kube-api-access-4wdf7\") pod \"calico-typha-55fb9c7cbb-k6nc4\" (UID: \"e8f6bbea-68e5-4eda-ad11-84e37ef92945\") " pod="calico-system/calico-typha-55fb9c7cbb-k6nc4" Jan 20 00:34:04.990870 kubelet[2558]: I0120 00:34:04.990808 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8f6bbea-68e5-4eda-ad11-84e37ef92945-typha-certs\") pod \"calico-typha-55fb9c7cbb-k6nc4\" (UID: \"e8f6bbea-68e5-4eda-ad11-84e37ef92945\") " pod="calico-system/calico-typha-55fb9c7cbb-k6nc4" Jan 20 00:34:05.079698 systemd[1]: Created slice kubepods-besteffort-pod839666c8_ba11_467f_95ac_0cbc0271d0b1.slice - libcontainer container kubepods-besteffort-pod839666c8_ba11_467f_95ac_0cbc0271d0b1.slice. Jan 20 00:34:05.186888 kubelet[2558]: E0120 00:34:05.186121 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:05.187782 containerd[1468]: time="2026-01-20T00:34:05.187696827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55fb9c7cbb-k6nc4,Uid:e8f6bbea-68e5-4eda-ad11-84e37ef92945,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:05.194164 kubelet[2558]: I0120 00:34:05.194007 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-lib-modules\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194164 kubelet[2558]: I0120 00:34:05.194107 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-xtables-lock\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194164 kubelet[2558]: I0120 00:34:05.194141 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-cni-log-dir\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194164 kubelet[2558]: I0120 00:34:05.194163 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-cni-net-dir\") pod 
\"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194754 kubelet[2558]: I0120 00:34:05.194195 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-cni-bin-dir\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194754 kubelet[2558]: I0120 00:34:05.194220 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/839666c8-ba11-467f-95ac-0cbc0271d0b1-node-certs\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194754 kubelet[2558]: I0120 00:34:05.194244 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-var-lib-calico\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194754 kubelet[2558]: I0120 00:34:05.194271 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-var-run-calico\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194754 kubelet[2558]: I0120 00:34:05.194297 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8nh\" (UniqueName: \"kubernetes.io/projected/839666c8-ba11-467f-95ac-0cbc0271d0b1-kube-api-access-rg8nh\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194864 kubelet[2558]: I0120 00:34:05.194327 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/839666c8-ba11-467f-95ac-0cbc0271d0b1-tigera-ca-bundle\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194864 kubelet[2558]: I0120 00:34:05.194400 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-flexvol-driver-host\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.194864 kubelet[2558]: I0120 00:34:05.194434 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/839666c8-ba11-467f-95ac-0cbc0271d0b1-policysync\") pod \"calico-node-jkl2d\" (UID: \"839666c8-ba11-467f-95ac-0cbc0271d0b1\") " pod="calico-system/calico-node-jkl2d" Jan 20 00:34:05.270646 containerd[1468]: time="2026-01-20T00:34:05.267980772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:05.270646 containerd[1468]: time="2026-01-20T00:34:05.268264852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:05.270646 containerd[1468]: time="2026-01-20T00:34:05.268280521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:05.270646 containerd[1468]: time="2026-01-20T00:34:05.268672082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:05.295838 kubelet[2558]: E0120 00:34:05.295787 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:05.317428 kubelet[2558]: E0120 00:34:05.317240 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.317428 kubelet[2558]: W0120 00:34:05.317302 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.320236 kubelet[2558]: E0120 00:34:05.320155 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.321939 kubelet[2558]: E0120 00:34:05.321642 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.321939 kubelet[2558]: W0120 00:34:05.321684 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.321939 kubelet[2558]: E0120 00:34:05.321804 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.324258 kubelet[2558]: E0120 00:34:05.324226 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.324258 kubelet[2558]: W0120 00:34:05.324249 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.324397 kubelet[2558]: E0120 00:34:05.324269 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.329721 kubelet[2558]: E0120 00:34:05.329331 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.329721 kubelet[2558]: W0120 00:34:05.329380 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.329721 kubelet[2558]: E0120 00:34:05.329399 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.330088 kubelet[2558]: E0120 00:34:05.329896 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.330088 kubelet[2558]: W0120 00:34:05.329918 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.330088 kubelet[2558]: E0120 00:34:05.329932 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.330715 kubelet[2558]: E0120 00:34:05.330350 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.330715 kubelet[2558]: W0120 00:34:05.330374 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.330715 kubelet[2558]: E0120 00:34:05.330393 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.331126 kubelet[2558]: E0120 00:34:05.331076 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.331126 kubelet[2558]: W0120 00:34:05.331124 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.331226 kubelet[2558]: E0120 00:34:05.331140 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.331737 kubelet[2558]: E0120 00:34:05.331612 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.331737 kubelet[2558]: W0120 00:34:05.331663 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.331737 kubelet[2558]: E0120 00:34:05.331680 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.331860 systemd[1]: Started cri-containerd-f2b47d0b256ac7e1ccba09e0aaf651a35aa7e1fa8f8f6d596b0550d4519b1876.scope - libcontainer container f2b47d0b256ac7e1ccba09e0aaf651a35aa7e1fa8f8f6d596b0550d4519b1876. Jan 20 00:34:05.332930 kubelet[2558]: E0120 00:34:05.332861 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.332930 kubelet[2558]: W0120 00:34:05.332916 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.333116 kubelet[2558]: E0120 00:34:05.332935 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.334027 kubelet[2558]: E0120 00:34:05.333979 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.334027 kubelet[2558]: W0120 00:34:05.334017 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.334319 kubelet[2558]: E0120 00:34:05.334032 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.335049 kubelet[2558]: E0120 00:34:05.334584 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.335049 kubelet[2558]: W0120 00:34:05.334634 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.335049 kubelet[2558]: E0120 00:34:05.334649 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.335356 kubelet[2558]: E0120 00:34:05.335231 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.335356 kubelet[2558]: W0120 00:34:05.335286 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.335356 kubelet[2558]: E0120 00:34:05.335301 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.335984 kubelet[2558]: E0120 00:34:05.335777 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.335984 kubelet[2558]: W0120 00:34:05.335827 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.335984 kubelet[2558]: E0120 00:34:05.335840 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.336307 kubelet[2558]: E0120 00:34:05.336222 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.336307 kubelet[2558]: W0120 00:34:05.336237 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.336307 kubelet[2558]: E0120 00:34:05.336249 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.337768 kubelet[2558]: E0120 00:34:05.336682 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.337768 kubelet[2558]: W0120 00:34:05.336697 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.337768 kubelet[2558]: E0120 00:34:05.336708 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.337768 kubelet[2558]: E0120 00:34:05.337171 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.337768 kubelet[2558]: W0120 00:34:05.337183 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.337768 kubelet[2558]: E0120 00:34:05.337196 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.338768 kubelet[2558]: E0120 00:34:05.338589 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.338768 kubelet[2558]: W0120 00:34:05.338609 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.338768 kubelet[2558]: E0120 00:34:05.338626 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.340213 kubelet[2558]: E0120 00:34:05.340115 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.340213 kubelet[2558]: W0120 00:34:05.340157 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.340213 kubelet[2558]: E0120 00:34:05.340169 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.340665 kubelet[2558]: E0120 00:34:05.340557 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.340776 kubelet[2558]: W0120 00:34:05.340671 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.340776 kubelet[2558]: E0120 00:34:05.340689 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.341181 kubelet[2558]: E0120 00:34:05.341072 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.341300 kubelet[2558]: W0120 00:34:05.341091 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.341300 kubelet[2558]: E0120 00:34:05.341242 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.341854 kubelet[2558]: E0120 00:34:05.341638 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.341854 kubelet[2558]: W0120 00:34:05.341654 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.341854 kubelet[2558]: E0120 00:34:05.341664 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.343030 kubelet[2558]: E0120 00:34:05.342122 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.343030 kubelet[2558]: W0120 00:34:05.342136 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.343030 kubelet[2558]: E0120 00:34:05.342145 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.343030 kubelet[2558]: E0120 00:34:05.342882 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.343030 kubelet[2558]: W0120 00:34:05.342892 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.343030 kubelet[2558]: E0120 00:34:05.342901 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.385331 kubelet[2558]: E0120 00:34:05.385290 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:05.386114 containerd[1468]: time="2026-01-20T00:34:05.386010499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jkl2d,Uid:839666c8-ba11-467f-95ac-0cbc0271d0b1,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:05.397131 kubelet[2558]: E0120 00:34:05.396208 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.397131 kubelet[2558]: W0120 00:34:05.396259 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.397131 kubelet[2558]: E0120 00:34:05.396286 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.397131 kubelet[2558]: I0120 00:34:05.396346 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d7f8729-92e8-466b-ac93-b93fcaadeb7a-varrun\") pod \"csi-node-driver-wkvnv\" (UID: \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\") " pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:05.397131 kubelet[2558]: E0120 00:34:05.397086 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.397131 kubelet[2558]: W0120 00:34:05.397099 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.397638 kubelet[2558]: E0120 00:34:05.397208 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.397638 kubelet[2558]: I0120 00:34:05.397317 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d7f8729-92e8-466b-ac93-b93fcaadeb7a-kubelet-dir\") pod \"csi-node-driver-wkvnv\" (UID: \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\") " pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:05.398872 kubelet[2558]: E0120 00:34:05.398109 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.398872 kubelet[2558]: W0120 00:34:05.398133 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.398872 kubelet[2558]: E0120 00:34:05.398212 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.398872 kubelet[2558]: I0120 00:34:05.398603 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d7f8729-92e8-466b-ac93-b93fcaadeb7a-socket-dir\") pod \"csi-node-driver-wkvnv\" (UID: \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\") " pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:05.400811 kubelet[2558]: E0120 00:34:05.399647 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.400811 kubelet[2558]: W0120 00:34:05.399663 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.400811 kubelet[2558]: E0120 00:34:05.399715 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.401834 kubelet[2558]: E0120 00:34:05.401769 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.401834 kubelet[2558]: W0120 00:34:05.401785 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.402116 kubelet[2558]: E0120 00:34:05.401912 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.402416 kubelet[2558]: E0120 00:34:05.402361 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.402416 kubelet[2558]: W0120 00:34:05.402386 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.402831 kubelet[2558]: E0120 00:34:05.402540 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.403767 kubelet[2558]: E0120 00:34:05.403059 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.403767 kubelet[2558]: W0120 00:34:05.403106 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.403767 kubelet[2558]: E0120 00:34:05.403285 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.403767 kubelet[2558]: I0120 00:34:05.403310 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7pk\" (UniqueName: \"kubernetes.io/projected/2d7f8729-92e8-466b-ac93-b93fcaadeb7a-kube-api-access-jg7pk\") pod \"csi-node-driver-wkvnv\" (UID: \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\") " pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:05.403767 kubelet[2558]: E0120 00:34:05.403564 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.403767 kubelet[2558]: W0120 00:34:05.403575 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.403767 kubelet[2558]: E0120 00:34:05.403730 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.404702 kubelet[2558]: E0120 00:34:05.404353 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.404702 kubelet[2558]: W0120 00:34:05.404367 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.404702 kubelet[2558]: E0120 00:34:05.404378 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.404886 kubelet[2558]: E0120 00:34:05.404873 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.404928 kubelet[2558]: W0120 00:34:05.404888 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.404928 kubelet[2558]: E0120 00:34:05.404912 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.406595 kubelet[2558]: E0120 00:34:05.405344 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.406595 kubelet[2558]: W0120 00:34:05.405363 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.406595 kubelet[2558]: E0120 00:34:05.405375 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.406595 kubelet[2558]: E0120 00:34:05.405807 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.406595 kubelet[2558]: W0120 00:34:05.405817 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.406595 kubelet[2558]: E0120 00:34:05.405826 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.406595 kubelet[2558]: I0120 00:34:05.405843 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d7f8729-92e8-466b-ac93-b93fcaadeb7a-registration-dir\") pod \"csi-node-driver-wkvnv\" (UID: \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\") " pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:05.407402 kubelet[2558]: E0120 00:34:05.406763 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.407402 kubelet[2558]: W0120 00:34:05.406777 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.407402 kubelet[2558]: E0120 00:34:05.406790 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.408117 kubelet[2558]: E0120 00:34:05.408017 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.408117 kubelet[2558]: W0120 00:34:05.408032 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.408117 kubelet[2558]: E0120 00:34:05.408042 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.408770 kubelet[2558]: E0120 00:34:05.408642 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.408770 kubelet[2558]: W0120 00:34:05.408656 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.408770 kubelet[2558]: E0120 00:34:05.408720 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.441871 containerd[1468]: time="2026-01-20T00:34:05.440923010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:05.441871 containerd[1468]: time="2026-01-20T00:34:05.441200408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:05.441871 containerd[1468]: time="2026-01-20T00:34:05.441290806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:05.444610 containerd[1468]: time="2026-01-20T00:34:05.444381976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:05.504798 systemd[1]: Started cri-containerd-c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c.scope - libcontainer container c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c. Jan 20 00:34:05.506762 containerd[1468]: time="2026-01-20T00:34:05.506384172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55fb9c7cbb-k6nc4,Uid:e8f6bbea-68e5-4eda-ad11-84e37ef92945,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2b47d0b256ac7e1ccba09e0aaf651a35aa7e1fa8f8f6d596b0550d4519b1876\"" Jan 20 00:34:05.509117 kubelet[2558]: E0120 00:34:05.509035 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.509117 kubelet[2558]: W0120 00:34:05.509091 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.509117 kubelet[2558]: E0120 00:34:05.509115 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.510086 kubelet[2558]: E0120 00:34:05.509648 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.510086 kubelet[2558]: W0120 00:34:05.509666 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.510086 kubelet[2558]: E0120 00:34:05.509684 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.510289 kubelet[2558]: E0120 00:34:05.510249 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.510289 kubelet[2558]: W0120 00:34:05.510264 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.510356 kubelet[2558]: E0120 00:34:05.510286 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.512663 kubelet[2558]: E0120 00:34:05.511774 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.512663 kubelet[2558]: W0120 00:34:05.511829 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.512663 kubelet[2558]: E0120 00:34:05.511855 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.512663 kubelet[2558]: E0120 00:34:05.512607 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.512663 kubelet[2558]: W0120 00:34:05.512621 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.512663 kubelet[2558]: E0120 00:34:05.512642 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.513275 kubelet[2558]: E0120 00:34:05.513153 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.513275 kubelet[2558]: W0120 00:34:05.513173 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.513748 kubelet[2558]: E0120 00:34:05.513294 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.513748 kubelet[2558]: E0120 00:34:05.513612 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.513748 kubelet[2558]: W0120 00:34:05.513625 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.514162 kubelet[2558]: E0120 00:34:05.513758 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.514162 kubelet[2558]: E0120 00:34:05.513922 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.514162 kubelet[2558]: W0120 00:34:05.513935 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.514162 kubelet[2558]: E0120 00:34:05.514091 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.514287 kubelet[2558]: E0120 00:34:05.514255 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.514287 kubelet[2558]: W0120 00:34:05.514267 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.515386 kubelet[2558]: E0120 00:34:05.514420 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.515386 kubelet[2558]: E0120 00:34:05.514690 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.515386 kubelet[2558]: W0120 00:34:05.514702 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.515386 kubelet[2558]: E0120 00:34:05.514821 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.515386 kubelet[2558]: E0120 00:34:05.514984 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.515386 kubelet[2558]: W0120 00:34:05.514996 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.515386 kubelet[2558]: E0120 00:34:05.515143 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.515715 kubelet[2558]: E0120 00:34:05.515621 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.515715 kubelet[2558]: W0120 00:34:05.515634 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.515830 kubelet[2558]: E0120 00:34:05.515796 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.516233 kubelet[2558]: E0120 00:34:05.516112 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.516233 kubelet[2558]: W0120 00:34:05.516164 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.516319 kubelet[2558]: E0120 00:34:05.516287 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.517784 kubelet[2558]: E0120 00:34:05.516758 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.517784 kubelet[2558]: W0120 00:34:05.516775 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.517784 kubelet[2558]: E0120 00:34:05.517061 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.517784 kubelet[2558]: E0120 00:34:05.517246 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.517784 kubelet[2558]: W0120 00:34:05.517259 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.517784 kubelet[2558]: E0120 00:34:05.517568 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.517784 kubelet[2558]: E0120 00:34:05.517766 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.517784 kubelet[2558]: W0120 00:34:05.517779 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.519212 kubelet[2558]: E0120 00:34:05.518288 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.519212 kubelet[2558]: E0120 00:34:05.518579 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:05.519773 kubelet[2558]: E0120 00:34:05.519747 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.519773 kubelet[2558]: W0120 00:34:05.519764 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.520346 containerd[1468]: time="2026-01-20T00:34:05.520066293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 00:34:05.520406 kubelet[2558]: E0120 00:34:05.520211 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.520406 kubelet[2558]: E0120 00:34:05.520314 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.520406 kubelet[2558]: W0120 00:34:05.520325 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.521682 kubelet[2558]: E0120 00:34:05.521038 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.521682 kubelet[2558]: E0120 00:34:05.521324 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.521682 kubelet[2558]: W0120 00:34:05.521355 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.521948 kubelet[2558]: E0120 00:34:05.521916 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.522884 kubelet[2558]: E0120 00:34:05.522672 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.522884 kubelet[2558]: W0120 00:34:05.522731 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.523299 kubelet[2558]: E0120 00:34:05.523217 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.524757 kubelet[2558]: E0120 00:34:05.523997 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.524757 kubelet[2558]: W0120 00:34:05.524021 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.524757 kubelet[2558]: E0120 00:34:05.524707 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.525617 kubelet[2558]: E0120 00:34:05.525303 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.525617 kubelet[2558]: W0120 00:34:05.525320 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.526346 kubelet[2558]: E0120 00:34:05.526038 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.528578 kubelet[2558]: E0120 00:34:05.528207 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.528578 kubelet[2558]: W0120 00:34:05.528234 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.528765 kubelet[2558]: E0120 00:34:05.528750 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.528832 kubelet[2558]: W0120 00:34:05.528818 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.528920 kubelet[2558]: E0120 00:34:05.528896 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.529146 kubelet[2558]: E0120 00:34:05.529126 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.531325 kubelet[2558]: E0120 00:34:05.531236 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.531325 kubelet[2558]: W0120 00:34:05.531263 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.531325 kubelet[2558]: E0120 00:34:05.531286 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:05.543327 kubelet[2558]: E0120 00:34:05.543254 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:05.543327 kubelet[2558]: W0120 00:34:05.543316 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:05.543418 kubelet[2558]: E0120 00:34:05.543340 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:05.593609 containerd[1468]: time="2026-01-20T00:34:05.593318145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jkl2d,Uid:839666c8-ba11-467f-95ac-0cbc0271d0b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\"" Jan 20 00:34:05.594317 kubelet[2558]: E0120 00:34:05.594204 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:06.432326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003568939.mount: Deactivated successfully. Jan 20 00:34:07.235792 containerd[1468]: time="2026-01-20T00:34:07.235667908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:07.236888 containerd[1468]: time="2026-01-20T00:34:07.236813915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 20 00:34:07.238743 containerd[1468]: time="2026-01-20T00:34:07.238650913Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:07.241047 containerd[1468]: time="2026-01-20T00:34:07.240960229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:07.242379 containerd[1468]: time="2026-01-20T00:34:07.242181855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.721201154s" Jan 20 00:34:07.242379 containerd[1468]: time="2026-01-20T00:34:07.242254841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 00:34:07.247748 containerd[1468]: time="2026-01-20T00:34:07.247683732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 00:34:07.286168 containerd[1468]: time="2026-01-20T00:34:07.285990336Z" level=info msg="CreateContainer within sandbox \"f2b47d0b256ac7e1ccba09e0aaf651a35aa7e1fa8f8f6d596b0550d4519b1876\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 00:34:07.315001 containerd[1468]: time="2026-01-20T00:34:07.314883984Z" level=info 
msg="CreateContainer within sandbox \"f2b47d0b256ac7e1ccba09e0aaf651a35aa7e1fa8f8f6d596b0550d4519b1876\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fa2c512161da6e1c52d82f1ad0a353345bc1a9ec58b4ed63d11e7ebec57f94d9\"" Jan 20 00:34:07.322200 containerd[1468]: time="2026-01-20T00:34:07.322107065Z" level=info msg="StartContainer for \"fa2c512161da6e1c52d82f1ad0a353345bc1a9ec58b4ed63d11e7ebec57f94d9\"" Jan 20 00:34:07.403769 systemd[1]: Started cri-containerd-fa2c512161da6e1c52d82f1ad0a353345bc1a9ec58b4ed63d11e7ebec57f94d9.scope - libcontainer container fa2c512161da6e1c52d82f1ad0a353345bc1a9ec58b4ed63d11e7ebec57f94d9. Jan 20 00:34:07.483132 containerd[1468]: time="2026-01-20T00:34:07.482950904Z" level=info msg="StartContainer for \"fa2c512161da6e1c52d82f1ad0a353345bc1a9ec58b4ed63d11e7ebec57f94d9\" returns successfully" Jan 20 00:34:07.594374 kubelet[2558]: E0120 00:34:07.594054 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:08.145097 kubelet[2558]: E0120 00:34:08.144851 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:08.178586 kubelet[2558]: I0120 00:34:08.175934 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55fb9c7cbb-k6nc4" podStartSLOduration=2.446712205 podStartE2EDuration="4.174697072s" podCreationTimestamp="2026-01-20 00:34:04 +0000 UTC" firstStartedPulling="2026-01-20 00:34:05.519420116 +0000 UTC m=+34.334801941" lastFinishedPulling="2026-01-20 00:34:07.247404974 +0000 UTC m=+36.062786808" observedRunningTime="2026-01-20 00:34:08.170818372 +0000 UTC m=+36.986200216" watchObservedRunningTime="2026-01-20 00:34:08.174697072 +0000 UTC m=+36.990078896" Jan 20 00:34:08.189330 kubelet[2558]: E0120 00:34:08.188733 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.189330 kubelet[2558]: W0120 00:34:08.188759 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.189330 kubelet[2558]: E0120 00:34:08.188787 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.190153 kubelet[2558]: E0120 00:34:08.190115 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.190212 kubelet[2558]: W0120 00:34:08.190158 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.190212 kubelet[2558]: E0120 00:34:08.190181 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.190759 kubelet[2558]: E0120 00:34:08.190714 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.191696 kubelet[2558]: W0120 00:34:08.191665 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.191794 kubelet[2558]: E0120 00:34:08.191755 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.192391 kubelet[2558]: E0120 00:34:08.192269 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.192391 kubelet[2558]: W0120 00:34:08.192282 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.192391 kubelet[2558]: E0120 00:34:08.192297 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.194044 kubelet[2558]: E0120 00:34:08.193909 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.194177 kubelet[2558]: W0120 00:34:08.194051 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.194177 kubelet[2558]: E0120 00:34:08.194069 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.195178 kubelet[2558]: E0120 00:34:08.195017 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.195178 kubelet[2558]: W0120 00:34:08.195066 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.195178 kubelet[2558]: E0120 00:34:08.195081 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.196617 kubelet[2558]: E0120 00:34:08.196094 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.196617 kubelet[2558]: W0120 00:34:08.196109 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.196617 kubelet[2558]: E0120 00:34:08.196124 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.196820 kubelet[2558]: E0120 00:34:08.196804 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.197376 kubelet[2558]: W0120 00:34:08.196820 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.197376 kubelet[2558]: E0120 00:34:08.196835 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.197727 kubelet[2558]: E0120 00:34:08.197620 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.197830 kubelet[2558]: W0120 00:34:08.197784 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.197949 kubelet[2558]: E0120 00:34:08.197804 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.198688 kubelet[2558]: E0120 00:34:08.198641 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.198796 kubelet[2558]: W0120 00:34:08.198764 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.198796 kubelet[2558]: E0120 00:34:08.198779 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.199869 kubelet[2558]: E0120 00:34:08.199823 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.199869 kubelet[2558]: W0120 00:34:08.199839 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.199869 kubelet[2558]: E0120 00:34:08.199856 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.200658 kubelet[2558]: E0120 00:34:08.200421 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.200658 kubelet[2558]: W0120 00:34:08.200578 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.200658 kubelet[2558]: E0120 00:34:08.200594 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.202362 kubelet[2558]: E0120 00:34:08.202317 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.202362 kubelet[2558]: W0120 00:34:08.202335 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.202362 kubelet[2558]: E0120 00:34:08.202351 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.203560 kubelet[2558]: E0120 00:34:08.203373 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.203560 kubelet[2558]: W0120 00:34:08.203390 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.203560 kubelet[2558]: E0120 00:34:08.203404 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.204110 kubelet[2558]: E0120 00:34:08.203855 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.204110 kubelet[2558]: W0120 00:34:08.203871 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.204110 kubelet[2558]: E0120 00:34:08.203884 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.241131 kubelet[2558]: E0120 00:34:08.241032 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.241131 kubelet[2558]: W0120 00:34:08.241093 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.241131 kubelet[2558]: E0120 00:34:08.241120 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.241895 kubelet[2558]: E0120 00:34:08.241798 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.241895 kubelet[2558]: W0120 00:34:08.241858 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.241895 kubelet[2558]: E0120 00:34:08.241890 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.242569 kubelet[2558]: E0120 00:34:08.242391 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.242569 kubelet[2558]: W0120 00:34:08.242442 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.242675 kubelet[2558]: E0120 00:34:08.242459 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.244876 kubelet[2558]: E0120 00:34:08.244165 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.244876 kubelet[2558]: W0120 00:34:08.244190 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.244876 kubelet[2558]: E0120 00:34:08.244223 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.245014 kubelet[2558]: E0120 00:34:08.244978 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.245014 kubelet[2558]: W0120 00:34:08.244992 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.245165 kubelet[2558]: E0120 00:34:08.245065 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.245861 kubelet[2558]: E0120 00:34:08.245599 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.245861 kubelet[2558]: W0120 00:34:08.245621 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.245861 kubelet[2558]: E0120 00:34:08.245784 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.246270 kubelet[2558]: E0120 00:34:08.246179 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.246270 kubelet[2558]: W0120 00:34:08.246222 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.246766 kubelet[2558]: E0120 00:34:08.246462 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.247090 kubelet[2558]: E0120 00:34:08.247015 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.247090 kubelet[2558]: W0120 00:34:08.247065 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.247165 kubelet[2558]: E0120 00:34:08.247122 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.247703 kubelet[2558]: E0120 00:34:08.247638 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.247703 kubelet[2558]: W0120 00:34:08.247691 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.247775 kubelet[2558]: E0120 00:34:08.247742 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.248223 kubelet[2558]: E0120 00:34:08.248158 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.248223 kubelet[2558]: W0120 00:34:08.248207 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.248288 kubelet[2558]: E0120 00:34:08.248254 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.248944 kubelet[2558]: E0120 00:34:08.248881 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.248944 kubelet[2558]: W0120 00:34:08.248926 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.249189 kubelet[2558]: E0120 00:34:08.249118 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.249580 kubelet[2558]: E0120 00:34:08.249539 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.249580 kubelet[2558]: W0120 00:34:08.249576 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.249699 kubelet[2558]: E0120 00:34:08.249655 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.250096 kubelet[2558]: E0120 00:34:08.250032 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.250096 kubelet[2558]: W0120 00:34:08.250077 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.250216 kubelet[2558]: E0120 00:34:08.250186 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.250607 kubelet[2558]: E0120 00:34:08.250546 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.250607 kubelet[2558]: W0120 00:34:08.250589 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.250671 kubelet[2558]: E0120 00:34:08.250628 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.251150 kubelet[2558]: E0120 00:34:08.251084 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.251150 kubelet[2558]: W0120 00:34:08.251126 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.251208 kubelet[2558]: E0120 00:34:08.251167 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.251825 kubelet[2558]: E0120 00:34:08.251757 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.251825 kubelet[2558]: W0120 00:34:08.251811 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.251886 kubelet[2558]: E0120 00:34:08.251845 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.252299 kubelet[2558]: E0120 00:34:08.252259 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.252299 kubelet[2558]: W0120 00:34:08.252294 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.252364 kubelet[2558]: E0120 00:34:08.252338 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:34:08.252997 kubelet[2558]: E0120 00:34:08.252889 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:34:08.252997 kubelet[2558]: W0120 00:34:08.252925 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:34:08.252997 kubelet[2558]: E0120 00:34:08.252936 2558 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:34:08.435048 containerd[1468]: time="2026-01-20T00:34:08.434796825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:08.436114 containerd[1468]: time="2026-01-20T00:34:08.436034138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 00:34:08.438917 containerd[1468]: time="2026-01-20T00:34:08.438821814Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:08.441870 containerd[1468]: time="2026-01-20T00:34:08.441800539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:08.442724 containerd[1468]: time="2026-01-20T00:34:08.442624197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.194870034s" Jan 20 00:34:08.442724 containerd[1468]: time="2026-01-20T00:34:08.442681994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 00:34:08.445915 containerd[1468]: time="2026-01-20T00:34:08.445677906Z" level=info msg="CreateContainer within sandbox \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 00:34:08.495965 containerd[1468]: time="2026-01-20T00:34:08.495873393Z" level=info msg="CreateContainer within sandbox \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb\"" Jan 20 00:34:08.496985 containerd[1468]: time="2026-01-20T00:34:08.496847263Z" level=info msg="StartContainer for \"2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb\"" Jan 20 00:34:08.554355 systemd[1]: Started cri-containerd-2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb.scope - libcontainer container 2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb. 
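Editor's note: the repeated driver-call.go and plugins.go errors above come from the kubelet probing its FlexVolume plugin directory and finding nodeagent~uds without an executable uds binary; the exec fails ("executable file not found in $PATH"), the captured output is empty, and unmarshalling that empty output is exactly what yields "unexpected end of JSON input". Below is a minimal Go sketch of both halves (illustrative only, not kubelet or Calico code; the driverStatus fields follow the documented FlexVolume response format).

// Minimal sketch: why an empty driver response produces the logged error, and the
// kind of JSON a FlexVolume driver is expected to print for the "init" call.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the minimal fields a FlexVolume driver reports back to the
// kubelet; illustrative struct, field names per the documented spec.
type driverStatus struct {
	Status       string `json:"status"`
	Message      string `json:"message,omitempty"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

func main() {
	// When the driver binary is missing, exec fails and the captured output is "".
	// Unmarshalling that empty output is what produces the error in the log.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("unmarshal empty output:", err) // unexpected end of JSON input
	}

	// A working driver would instead print something like this in response to "init".
	ok := driverStatus{Status: "Success"}
	ok.Capabilities.Attach = false
	out, _ := json.Marshal(ok)
	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}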
Jan 20 00:34:08.616764 containerd[1468]: time="2026-01-20T00:34:08.616693274Z" level=info msg="StartContainer for \"2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb\" returns successfully" Jan 20 00:34:08.629338 systemd[1]: cri-containerd-2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb.scope: Deactivated successfully. Jan 20 00:34:08.812293 containerd[1468]: time="2026-01-20T00:34:08.812132733Z" level=info msg="shim disconnected" id=2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb namespace=k8s.io Jan 20 00:34:08.812293 containerd[1468]: time="2026-01-20T00:34:08.812174281Z" level=warning msg="cleaning up after shim disconnected" id=2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb namespace=k8s.io Jan 20 00:34:08.812293 containerd[1468]: time="2026-01-20T00:34:08.812183067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:34:09.153952 kubelet[2558]: E0120 00:34:09.152761 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:09.155140 kubelet[2558]: E0120 00:34:09.154808 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:09.157276 containerd[1468]: time="2026-01-20T00:34:09.156434632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:34:09.299054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aeb1f9cf08f001b0aecf11127bc4b55104680e15277fbbf5f2bb7d895b93ebb-rootfs.mount: Deactivated successfully. Jan 20 00:34:09.575017 kubelet[2558]: E0120 00:34:09.574915 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:10.157369 kubelet[2558]: E0120 00:34:10.157090 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:11.562353 kubelet[2558]: E0120 00:34:11.562302 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:11.981275 containerd[1468]: time="2026-01-20T00:34:11.981203011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:11.982352 containerd[1468]: time="2026-01-20T00:34:11.982234475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 00:34:11.984945 containerd[1468]: time="2026-01-20T00:34:11.984792296Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:11.988144 containerd[1468]: time="2026-01-20T00:34:11.987816979Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:11.988940 containerd[1468]: time="2026-01-20T00:34:11.988891268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.83185071s" Jan 20 00:34:11.988991 containerd[1468]: time="2026-01-20T00:34:11.988948282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 00:34:11.992853 containerd[1468]: time="2026-01-20T00:34:11.992793893Z" level=info msg="CreateContainer within sandbox \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:34:12.016426 containerd[1468]: time="2026-01-20T00:34:12.016303425Z" level=info msg="CreateContainer within sandbox \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245\"" Jan 20 00:34:12.019194 containerd[1468]: time="2026-01-20T00:34:12.017403100Z" level=info msg="StartContainer for \"165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245\"" Jan 20 00:34:12.061713 systemd[1]: Started cri-containerd-165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245.scope - libcontainer container 165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245. Jan 20 00:34:12.104867 containerd[1468]: time="2026-01-20T00:34:12.104755291Z" level=info msg="StartContainer for \"165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245\" returns successfully" Jan 20 00:34:12.174282 kubelet[2558]: E0120 00:34:12.174252 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:13.183726 kubelet[2558]: E0120 00:34:13.182855 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:13.187882 systemd[1]: cri-containerd-165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245.scope: Deactivated successfully. Jan 20 00:34:13.188355 systemd[1]: cri-containerd-165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245.scope: Consumed 1.314s CPU time. Jan 20 00:34:13.225198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245-rootfs.mount: Deactivated successfully. 
Jan 20 00:34:13.231965 containerd[1468]: time="2026-01-20T00:34:13.231843207Z" level=info msg="shim disconnected" id=165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245 namespace=k8s.io Jan 20 00:34:13.231965 containerd[1468]: time="2026-01-20T00:34:13.231912446Z" level=warning msg="cleaning up after shim disconnected" id=165a5b51f691a20d575e178b124b099273c17e25b1a055aecce13f8dd46e5245 namespace=k8s.io Jan 20 00:34:13.231965 containerd[1468]: time="2026-01-20T00:34:13.231924799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:34:13.241573 kubelet[2558]: I0120 00:34:13.238384 2558 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:34:13.299138 containerd[1468]: time="2026-01-20T00:34:13.298773333Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:34:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:34:13.324725 kubelet[2558]: W0120 00:34:13.324583 2558 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jan 20 00:34:13.324725 kubelet[2558]: E0120 00:34:13.324622 2558 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 00:34:13.325380 kubelet[2558]: W0120 00:34:13.325364 2558 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 20 00:34:13.325552 kubelet[2558]: E0120 00:34:13.325387 2558 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 00:34:13.325552 kubelet[2558]: W0120 00:34:13.325433 2558 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 20 00:34:13.325552 kubelet[2558]: E0120 00:34:13.325542 2558 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 00:34:13.325967 kubelet[2558]: W0120 
00:34:13.325739 2558 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jan 20 00:34:13.325967 kubelet[2558]: E0120 00:34:13.325786 2558 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 00:34:13.325967 kubelet[2558]: W0120 00:34:13.325816 2558 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 20 00:34:13.325967 kubelet[2558]: E0120 00:34:13.325824 2558 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 20 00:34:13.332364 systemd[1]: Created slice kubepods-besteffort-pod3b5af3c4_8bd3_4827_8284_26abb85feced.slice - libcontainer container kubepods-besteffort-pod3b5af3c4_8bd3_4827_8284_26abb85feced.slice. Jan 20 00:34:13.342253 systemd[1]: Created slice kubepods-besteffort-poda0b6979d_ad60_4bcc_b38c_f806a4b1dd2c.slice - libcontainer container kubepods-besteffort-poda0b6979d_ad60_4bcc_b38c_f806a4b1dd2c.slice. 
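Note: the reflector warnings above ("no relationship found between node 'localhost' and this object") come from the Node authorizer: the kubelet may only read secrets and configmaps referenced by pods already bound to it, and the Calico and CoreDNS pods have only just been scheduled, so the first list/watch attempts are rejected; the corresponding volume mounts below fail with "failed to sync ... cache: timed out waiting for the condition" and are retried 500ms later. As a rough illustration only, with the namespace and secret name taken from the log but the surrounding plumbing hypothetical, the single-object read the kubelet ultimately needs looks like this with client-go:

```go
// Minimal client-go sketch, not kubelet code: read one named secret, which is
// the kind of access the Node authorizer grants once a pod referencing the
// secret is bound to this node. Namespace/name mirror the log entries above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	s, err := client.CoreV1().Secrets("calico-apiserver").
		Get(context.Background(), "calico-apiserver-certs", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not yet permitted or not yet cached:", err)
		return
	}
	fmt.Println("got secret", s.Name, "with", len(s.Data), "keys")
}
```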
Jan 20 00:34:13.349209 kubelet[2558]: I0120 00:34:13.349096 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffxd\" (UniqueName: \"kubernetes.io/projected/0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84-kube-api-access-pffxd\") pod \"calico-apiserver-78c5dffbd-t9x7r\" (UID: \"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84\") " pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" Jan 20 00:34:13.349209 kubelet[2558]: I0120 00:34:13.349184 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6r6\" (UniqueName: \"kubernetes.io/projected/7b39ba6d-0875-4d35-90a8-c9d91492b367-kube-api-access-ns6r6\") pod \"coredns-668d6bf9bc-7zhgb\" (UID: \"7b39ba6d-0875-4d35-90a8-c9d91492b367\") " pod="kube-system/coredns-668d6bf9bc-7zhgb" Jan 20 00:34:13.349661 kubelet[2558]: I0120 00:34:13.349216 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84-calico-apiserver-certs\") pod \"calico-apiserver-78c5dffbd-t9x7r\" (UID: \"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84\") " pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" Jan 20 00:34:13.349661 kubelet[2558]: I0120 00:34:13.349241 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b39ba6d-0875-4d35-90a8-c9d91492b367-config-volume\") pod \"coredns-668d6bf9bc-7zhgb\" (UID: \"7b39ba6d-0875-4d35-90a8-c9d91492b367\") " pod="kube-system/coredns-668d6bf9bc-7zhgb" Jan 20 00:34:13.354961 systemd[1]: Created slice kubepods-besteffort-pod96693105_0319_44f2_a458_134dbd8dc9b8.slice - libcontainer container kubepods-besteffort-pod96693105_0319_44f2_a458_134dbd8dc9b8.slice. Jan 20 00:34:13.374030 systemd[1]: Created slice kubepods-burstable-pod5393b5c7_b838_40b7_b5c7_c21832b5d797.slice - libcontainer container kubepods-burstable-pod5393b5c7_b838_40b7_b5c7_c21832b5d797.slice. Jan 20 00:34:13.383598 systemd[1]: Created slice kubepods-besteffort-pod0d6e1086_0ac8_4c92_bb35_cbe08d4a2e84.slice - libcontainer container kubepods-besteffort-pod0d6e1086_0ac8_4c92_bb35_cbe08d4a2e84.slice. Jan 20 00:34:13.390185 systemd[1]: Created slice kubepods-besteffort-pod4fd51efe_cc95_4265_995a_08b13dbea3b1.slice - libcontainer container kubepods-besteffort-pod4fd51efe_cc95_4265_995a_08b13dbea3b1.slice. Jan 20 00:34:13.399698 systemd[1]: Created slice kubepods-burstable-pod7b39ba6d_0875_4d35_90a8_c9d91492b367.slice - libcontainer container kubepods-burstable-pod7b39ba6d_0875_4d35_90a8_c9d91492b367.slice. 
Jan 20 00:34:13.450963 kubelet[2558]: I0120 00:34:13.450200 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fd51efe-cc95-4265-995a-08b13dbea3b1-goldmane-ca-bundle\") pod \"goldmane-666569f655-bx6wj\" (UID: \"4fd51efe-cc95-4265-995a-08b13dbea3b1\") " pod="calico-system/goldmane-666569f655-bx6wj" Jan 20 00:34:13.450963 kubelet[2558]: I0120 00:34:13.450241 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-ca-bundle\") pod \"whisker-6d544c66b-z7977\" (UID: \"3b5af3c4-8bd3-4827-8284-26abb85feced\") " pod="calico-system/whisker-6d544c66b-z7977" Jan 20 00:34:13.450963 kubelet[2558]: I0120 00:34:13.450268 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4fd51efe-cc95-4265-995a-08b13dbea3b1-goldmane-key-pair\") pod \"goldmane-666569f655-bx6wj\" (UID: \"4fd51efe-cc95-4265-995a-08b13dbea3b1\") " pod="calico-system/goldmane-666569f655-bx6wj" Jan 20 00:34:13.450963 kubelet[2558]: I0120 00:34:13.450329 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpm6\" (UniqueName: \"kubernetes.io/projected/5393b5c7-b838-40b7-b5c7-c21832b5d797-kube-api-access-pmpm6\") pod \"coredns-668d6bf9bc-g2srp\" (UID: \"5393b5c7-b838-40b7-b5c7-c21832b5d797\") " pod="kube-system/coredns-668d6bf9bc-g2srp" Jan 20 00:34:13.450963 kubelet[2558]: I0120 00:34:13.450365 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-backend-key-pair\") pod \"whisker-6d544c66b-z7977\" (UID: \"3b5af3c4-8bd3-4827-8284-26abb85feced\") " pod="calico-system/whisker-6d544c66b-z7977" Jan 20 00:34:13.451312 kubelet[2558]: I0120 00:34:13.450394 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd51efe-cc95-4265-995a-08b13dbea3b1-config\") pod \"goldmane-666569f655-bx6wj\" (UID: \"4fd51efe-cc95-4265-995a-08b13dbea3b1\") " pod="calico-system/goldmane-666569f655-bx6wj" Jan 20 00:34:13.451312 kubelet[2558]: I0120 00:34:13.450421 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2mb\" (UniqueName: \"kubernetes.io/projected/4fd51efe-cc95-4265-995a-08b13dbea3b1-kube-api-access-mt2mb\") pod \"goldmane-666569f655-bx6wj\" (UID: \"4fd51efe-cc95-4265-995a-08b13dbea3b1\") " pod="calico-system/goldmane-666569f655-bx6wj" Jan 20 00:34:13.451312 kubelet[2558]: I0120 00:34:13.450549 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rcc\" (UniqueName: \"kubernetes.io/projected/96693105-0319-44f2-a458-134dbd8dc9b8-kube-api-access-75rcc\") pod \"calico-apiserver-78c5dffbd-68fs6\" (UID: \"96693105-0319-44f2-a458-134dbd8dc9b8\") " pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" Jan 20 00:34:13.451312 kubelet[2558]: I0120 00:34:13.450584 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5393b5c7-b838-40b7-b5c7-c21832b5d797-config-volume\") pod \"coredns-668d6bf9bc-g2srp\" (UID: \"5393b5c7-b838-40b7-b5c7-c21832b5d797\") " pod="kube-system/coredns-668d6bf9bc-g2srp" Jan 20 00:34:13.451312 kubelet[2558]: I0120 00:34:13.450634 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtwq\" (UniqueName: \"kubernetes.io/projected/3b5af3c4-8bd3-4827-8284-26abb85feced-kube-api-access-fgtwq\") pod \"whisker-6d544c66b-z7977\" (UID: \"3b5af3c4-8bd3-4827-8284-26abb85feced\") " pod="calico-system/whisker-6d544c66b-z7977" Jan 20 00:34:13.451602 kubelet[2558]: I0120 00:34:13.450660 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c-tigera-ca-bundle\") pod \"calico-kube-controllers-654778bb87-lw5jd\" (UID: \"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c\") " pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" Jan 20 00:34:13.451602 kubelet[2558]: I0120 00:34:13.450706 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qvt\" (UniqueName: \"kubernetes.io/projected/a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c-kube-api-access-m9qvt\") pod \"calico-kube-controllers-654778bb87-lw5jd\" (UID: \"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c\") " pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" Jan 20 00:34:13.451602 kubelet[2558]: I0120 00:34:13.450738 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96693105-0319-44f2-a458-134dbd8dc9b8-calico-apiserver-certs\") pod \"calico-apiserver-78c5dffbd-68fs6\" (UID: \"96693105-0319-44f2-a458-134dbd8dc9b8\") " pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" Jan 20 00:34:13.607253 systemd[1]: Created slice kubepods-besteffort-pod2d7f8729_92e8_466b_ac93_b93fcaadeb7a.slice - libcontainer container kubepods-besteffort-pod2d7f8729_92e8_466b_ac93_b93fcaadeb7a.slice. 
Jan 20 00:34:13.610991 containerd[1468]: time="2026-01-20T00:34:13.610911579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkvnv,Uid:2d7f8729-92e8-466b-ac93-b93fcaadeb7a,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:13.640917 containerd[1468]: time="2026-01-20T00:34:13.640846164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d544c66b-z7977,Uid:3b5af3c4-8bd3-4827-8284-26abb85feced,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:13.652876 containerd[1468]: time="2026-01-20T00:34:13.652825695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654778bb87-lw5jd,Uid:a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:13.800129 containerd[1468]: time="2026-01-20T00:34:13.799819075Z" level=error msg="Failed to destroy network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.811173 containerd[1468]: time="2026-01-20T00:34:13.811049311Z" level=error msg="encountered an error cleaning up failed sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.811173 containerd[1468]: time="2026-01-20T00:34:13.811146401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkvnv,Uid:2d7f8729-92e8-466b-ac93-b93fcaadeb7a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.816056 containerd[1468]: time="2026-01-20T00:34:13.815879527Z" level=error msg="Failed to destroy network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.816189 containerd[1468]: time="2026-01-20T00:34:13.815881245Z" level=error msg="Failed to destroy network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.816517 containerd[1468]: time="2026-01-20T00:34:13.816390257Z" level=error msg="encountered an error cleaning up failed sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.816666 containerd[1468]: time="2026-01-20T00:34:13.816537210Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6d544c66b-z7977,Uid:3b5af3c4-8bd3-4827-8284-26abb85feced,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.816666 containerd[1468]: time="2026-01-20T00:34:13.816609570Z" level=error msg="encountered an error cleaning up failed sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.816666 containerd[1468]: time="2026-01-20T00:34:13.816644144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654778bb87-lw5jd,Uid:a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.819815 kubelet[2558]: E0120 00:34:13.819758 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.819925 kubelet[2558]: E0120 00:34:13.819847 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:13.819925 kubelet[2558]: E0120 00:34:13.819869 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkvnv" Jan 20 00:34:13.819985 kubelet[2558]: E0120 00:34:13.819758 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.820010 kubelet[2558]: E0120 00:34:13.819972 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:13.820108 kubelet[2558]: E0120 00:34:13.819795 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:13.820108 kubelet[2558]: E0120 00:34:13.820035 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d544c66b-z7977" Jan 20 00:34:13.820108 kubelet[2558]: E0120 00:34:13.820052 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d544c66b-z7977" Jan 20 00:34:13.820187 kubelet[2558]: E0120 00:34:13.819998 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" Jan 20 00:34:13.820187 kubelet[2558]: E0120 00:34:13.820165 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" Jan 20 00:34:13.820334 kubelet[2558]: E0120 00:34:13.820214 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-654778bb87-lw5jd_calico-system(a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-654778bb87-lw5jd_calico-system(a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:13.820334 kubelet[2558]: E0120 00:34:13.820079 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d544c66b-z7977_calico-system(3b5af3c4-8bd3-4827-8284-26abb85feced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d544c66b-z7977_calico-system(3b5af3c4-8bd3-4827-8284-26abb85feced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d544c66b-z7977" podUID="3b5af3c4-8bd3-4827-8284-26abb85feced" Jan 20 00:34:14.194166 kubelet[2558]: I0120 00:34:14.193688 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:14.199671 kubelet[2558]: I0120 00:34:14.199648 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:14.208905 containerd[1468]: time="2026-01-20T00:34:14.207846694Z" level=info msg="StopPodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\"" Jan 20 00:34:14.228779 containerd[1468]: time="2026-01-20T00:34:14.228674490Z" level=info msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" Jan 20 00:34:14.232031 kubelet[2558]: I0120 00:34:14.231295 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:14.238563 containerd[1468]: time="2026-01-20T00:34:14.238059709Z" level=info msg="StopPodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\"" Jan 20 00:34:14.245095 kubelet[2558]: E0120 00:34:14.243221 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:14.245250 containerd[1468]: time="2026-01-20T00:34:14.245208527Z" level=info msg="Ensure that sandbox 41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3 in task-service has been cleanup successfully" Jan 20 00:34:14.249102 containerd[1468]: time="2026-01-20T00:34:14.249009806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:34:14.252801 containerd[1468]: time="2026-01-20T00:34:14.252729933Z" level=info msg="Ensure that sandbox fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c in task-service has been cleanup successfully" Jan 20 00:34:14.255299 containerd[1468]: time="2026-01-20T00:34:14.255203868Z" level=info msg="Ensure that sandbox 317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b in task-service has been cleanup successfully" Jan 20 00:34:14.312229 containerd[1468]: time="2026-01-20T00:34:14.312179234Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-bx6wj,Uid:4fd51efe-cc95-4265-995a-08b13dbea3b1,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:14.350719 containerd[1468]: time="2026-01-20T00:34:14.350348854Z" level=error msg="StopPodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" failed" error="failed to destroy network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.351253 kubelet[2558]: E0120 00:34:14.351203 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:14.351394 kubelet[2558]: E0120 00:34:14.351286 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3"} Jan 20 00:34:14.351394 kubelet[2558]: E0120 00:34:14.351376 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:14.351646 kubelet[2558]: E0120 00:34:14.351410 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d7f8729-92e8-466b-ac93-b93fcaadeb7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:14.412070 containerd[1468]: time="2026-01-20T00:34:14.411804655Z" level=error msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" failed" error="failed to destroy network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.412663 kubelet[2558]: E0120 00:34:14.412539 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:14.412827 kubelet[2558]: E0120 00:34:14.412780 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c"} Jan 20 00:34:14.413043 kubelet[2558]: E0120 00:34:14.412841 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:14.413043 kubelet[2558]: E0120 00:34:14.412863 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:14.416590 containerd[1468]: time="2026-01-20T00:34:14.416375573Z" level=error msg="StopPodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" failed" error="failed to destroy network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.419091 kubelet[2558]: E0120 00:34:14.418884 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:14.419091 kubelet[2558]: E0120 00:34:14.419018 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b"} Jan 20 00:34:14.419212 kubelet[2558]: E0120 00:34:14.419135 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b5af3c4-8bd3-4827-8284-26abb85feced\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:14.419212 kubelet[2558]: E0120 00:34:14.419157 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b5af3c4-8bd3-4827-8284-26abb85feced\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d544c66b-z7977" podUID="3b5af3c4-8bd3-4827-8284-26abb85feced" Jan 20 00:34:14.451814 kubelet[2558]: E0120 00:34:14.451576 2558 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.453623 kubelet[2558]: E0120 00:34:14.452256 2558 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 00:34:14.453623 kubelet[2558]: E0120 00:34:14.452915 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84-calico-apiserver-certs podName:0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84 nodeName:}" failed. No retries permitted until 2026-01-20 00:34:14.952863321 +0000 UTC m=+43.768245165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84-calico-apiserver-certs") pod "calico-apiserver-78c5dffbd-t9x7r" (UID: "0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84") : failed to sync secret cache: timed out waiting for the condition Jan 20 00:34:14.453908 kubelet[2558]: E0120 00:34:14.453640 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b39ba6d-0875-4d35-90a8-c9d91492b367-config-volume podName:7b39ba6d-0875-4d35-90a8-c9d91492b367 nodeName:}" failed. No retries permitted until 2026-01-20 00:34:14.953620273 +0000 UTC m=+43.769002098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7b39ba6d-0875-4d35-90a8-c9d91492b367-config-volume") pod "coredns-668d6bf9bc-7zhgb" (UID: "7b39ba6d-0875-4d35-90a8-c9d91492b367") : failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.466893 kubelet[2558]: E0120 00:34:14.464391 2558 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.469710 kubelet[2558]: E0120 00:34:14.468369 2558 projected.go:194] Error preparing data for projected volume kube-api-access-pffxd for pod calico-apiserver/calico-apiserver-78c5dffbd-t9x7r: failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.469785 kubelet[2558]: E0120 00:34:14.469762 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84-kube-api-access-pffxd podName:0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84 nodeName:}" failed. No retries permitted until 2026-01-20 00:34:14.969742586 +0000 UTC m=+43.785124410 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pffxd" (UniqueName: "kubernetes.io/projected/0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84-kube-api-access-pffxd") pod "calico-apiserver-78c5dffbd-t9x7r" (UID: "0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84") : failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.479109 containerd[1468]: time="2026-01-20T00:34:14.478427844Z" level=error msg="Failed to destroy network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.482040 containerd[1468]: time="2026-01-20T00:34:14.481947030Z" level=error msg="encountered an error cleaning up failed sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.482126 containerd[1468]: time="2026-01-20T00:34:14.482101186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bx6wj,Uid:4fd51efe-cc95-4265-995a-08b13dbea3b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.482304 kubelet[2558]: E0120 00:34:14.482278 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:14.482341 kubelet[2558]: E0120 00:34:14.482321 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bx6wj" Jan 20 00:34:14.482430 kubelet[2558]: E0120 00:34:14.482393 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bx6wj" Jan 20 00:34:14.482594 kubelet[2558]: E0120 00:34:14.482530 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bx6wj_calico-system(4fd51efe-cc95-4265-995a-08b13dbea3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bx6wj_calico-system(4fd51efe-cc95-4265-995a-08b13dbea3b1)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:14.482814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e-shm.mount: Deactivated successfully. Jan 20 00:34:14.552839 kubelet[2558]: E0120 00:34:14.552703 2558 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.552839 kubelet[2558]: E0120 00:34:14.552826 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5393b5c7-b838-40b7-b5c7-c21832b5d797-config-volume podName:5393b5c7-b838-40b7-b5c7-c21832b5d797 nodeName:}" failed. No retries permitted until 2026-01-20 00:34:15.052809249 +0000 UTC m=+43.868191074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5393b5c7-b838-40b7-b5c7-c21832b5d797-config-volume") pod "coredns-668d6bf9bc-g2srp" (UID: "5393b5c7-b838-40b7-b5c7-c21832b5d797") : failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.552839 kubelet[2558]: E0120 00:34:14.552739 2558 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 00:34:14.553293 kubelet[2558]: E0120 00:34:14.552865 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96693105-0319-44f2-a458-134dbd8dc9b8-calico-apiserver-certs podName:96693105-0319-44f2-a458-134dbd8dc9b8 nodeName:}" failed. No retries permitted until 2026-01-20 00:34:15.052858702 +0000 UTC m=+43.868240526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/96693105-0319-44f2-a458-134dbd8dc9b8-calico-apiserver-certs") pod "calico-apiserver-78c5dffbd-68fs6" (UID: "96693105-0319-44f2-a458-134dbd8dc9b8") : failed to sync secret cache: timed out waiting for the condition Jan 20 00:34:14.576772 kubelet[2558]: E0120 00:34:14.576697 2558 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.576772 kubelet[2558]: E0120 00:34:14.576770 2558 projected.go:194] Error preparing data for projected volume kube-api-access-75rcc for pod calico-apiserver/calico-apiserver-78c5dffbd-68fs6: failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:14.576902 kubelet[2558]: E0120 00:34:14.576829 2558 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96693105-0319-44f2-a458-134dbd8dc9b8-kube-api-access-75rcc podName:96693105-0319-44f2-a458-134dbd8dc9b8 nodeName:}" failed. No retries permitted until 2026-01-20 00:34:15.076808461 +0000 UTC m=+43.892190285 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-75rcc" (UniqueName: "kubernetes.io/projected/96693105-0319-44f2-a458-134dbd8dc9b8-kube-api-access-75rcc") pod "calico-apiserver-78c5dffbd-68fs6" (UID: "96693105-0319-44f2-a458-134dbd8dc9b8") : failed to sync configmap cache: timed out waiting for the condition Jan 20 00:34:15.164962 containerd[1468]: time="2026-01-20T00:34:15.164833693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-68fs6,Uid:96693105-0319-44f2-a458-134dbd8dc9b8,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:34:15.180347 kubelet[2558]: E0120 00:34:15.179810 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:15.180716 containerd[1468]: time="2026-01-20T00:34:15.180666158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2srp,Uid:5393b5c7-b838-40b7-b5c7-c21832b5d797,Namespace:kube-system,Attempt:0,}" Jan 20 00:34:15.187840 containerd[1468]: time="2026-01-20T00:34:15.187217294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-t9x7r,Uid:0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:34:15.205192 kubelet[2558]: E0120 00:34:15.205084 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:15.207069 containerd[1468]: time="2026-01-20T00:34:15.206946662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zhgb,Uid:7b39ba6d-0875-4d35-90a8-c9d91492b367,Namespace:kube-system,Attempt:0,}" Jan 20 00:34:15.251653 kubelet[2558]: I0120 00:34:15.251561 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:15.256025 containerd[1468]: time="2026-01-20T00:34:15.253877000Z" level=info msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" Jan 20 00:34:15.256025 containerd[1468]: time="2026-01-20T00:34:15.254053769Z" level=info msg="Ensure that sandbox a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e in task-service has been cleanup successfully" Jan 20 00:34:15.310869 containerd[1468]: time="2026-01-20T00:34:15.310793303Z" level=error msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" failed" error="failed to destroy network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.313991 kubelet[2558]: E0120 00:34:15.312760 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:15.313991 kubelet[2558]: E0120 00:34:15.313005 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e"} Jan 20 00:34:15.313991 kubelet[2558]: E0120 00:34:15.313086 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fd51efe-cc95-4265-995a-08b13dbea3b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:15.313991 kubelet[2558]: E0120 00:34:15.313111 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fd51efe-cc95-4265-995a-08b13dbea3b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:15.322593 containerd[1468]: time="2026-01-20T00:34:15.322560418Z" level=error msg="Failed to destroy network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.323558 containerd[1468]: time="2026-01-20T00:34:15.323357760Z" level=error msg="encountered an error cleaning up failed sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.323558 containerd[1468]: time="2026-01-20T00:34:15.323531804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-68fs6,Uid:96693105-0319-44f2-a458-134dbd8dc9b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.325595 kubelet[2558]: E0120 00:34:15.325436 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.325595 kubelet[2558]: E0120 00:34:15.325582 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" Jan 20 00:34:15.325676 kubelet[2558]: E0120 00:34:15.325601 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" Jan 20 00:34:15.325870 kubelet[2558]: E0120 00:34:15.325725 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c5dffbd-68fs6_calico-apiserver(96693105-0319-44f2-a458-134dbd8dc9b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c5dffbd-68fs6_calico-apiserver(96693105-0319-44f2-a458-134dbd8dc9b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:15.328076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e-shm.mount: Deactivated successfully. Jan 20 00:34:15.361901 containerd[1468]: time="2026-01-20T00:34:15.361786874Z" level=error msg="Failed to destroy network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.362921 containerd[1468]: time="2026-01-20T00:34:15.362783327Z" level=error msg="encountered an error cleaning up failed sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.362921 containerd[1468]: time="2026-01-20T00:34:15.362903190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2srp,Uid:5393b5c7-b838-40b7-b5c7-c21832b5d797,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.365581 kubelet[2558]: E0120 00:34:15.363219 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.365581 kubelet[2558]: E0120 00:34:15.363665 2558 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g2srp" Jan 20 00:34:15.365581 kubelet[2558]: E0120 00:34:15.363700 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g2srp" Jan 20 00:34:15.365144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae-shm.mount: Deactivated successfully. Jan 20 00:34:15.366142 kubelet[2558]: E0120 00:34:15.365013 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g2srp_kube-system(5393b5c7-b838-40b7-b5c7-c21832b5d797)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g2srp_kube-system(5393b5c7-b838-40b7-b5c7-c21832b5d797)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g2srp" podUID="5393b5c7-b838-40b7-b5c7-c21832b5d797" Jan 20 00:34:15.380951 containerd[1468]: time="2026-01-20T00:34:15.379950600Z" level=error msg="Failed to destroy network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.380951 containerd[1468]: time="2026-01-20T00:34:15.380641025Z" level=error msg="encountered an error cleaning up failed sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.380951 containerd[1468]: time="2026-01-20T00:34:15.380747483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-t9x7r,Uid:0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.382016 kubelet[2558]: E0120 00:34:15.381096 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.382016 kubelet[2558]: E0120 00:34:15.381171 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" Jan 20 00:34:15.382016 kubelet[2558]: E0120 00:34:15.381199 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" Jan 20 00:34:15.382110 kubelet[2558]: E0120 00:34:15.381290 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c5dffbd-t9x7r_calico-apiserver(0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c5dffbd-t9x7r_calico-apiserver(0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:15.393960 containerd[1468]: time="2026-01-20T00:34:15.393867797Z" level=error msg="Failed to destroy network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.394740 containerd[1468]: time="2026-01-20T00:34:15.394689356Z" level=error msg="encountered an error cleaning up failed sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.394870 containerd[1468]: time="2026-01-20T00:34:15.394750940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zhgb,Uid:7b39ba6d-0875-4d35-90a8-c9d91492b367,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.395724 kubelet[2558]: E0120 00:34:15.395213 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:15.395724 kubelet[2558]: E0120 00:34:15.395318 2558 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7zhgb" Jan 20 00:34:15.395724 kubelet[2558]: E0120 00:34:15.395353 2558 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7zhgb" Jan 20 00:34:15.395961 kubelet[2558]: E0120 00:34:15.395394 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7zhgb_kube-system(7b39ba6d-0875-4d35-90a8-c9d91492b367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7zhgb_kube-system(7b39ba6d-0875-4d35-90a8-c9d91492b367)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7zhgb" podUID="7b39ba6d-0875-4d35-90a8-c9d91492b367" Jan 20 00:34:16.235146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7-shm.mount: Deactivated successfully. Jan 20 00:34:16.239689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932-shm.mount: Deactivated successfully. 
Jan 20 00:34:16.267761 kubelet[2558]: I0120 00:34:16.264181 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:16.267761 kubelet[2558]: I0120 00:34:16.266265 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:16.268264 containerd[1468]: time="2026-01-20T00:34:16.267300758Z" level=info msg="StopPodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\"" Jan 20 00:34:16.268264 containerd[1468]: time="2026-01-20T00:34:16.267692257Z" level=info msg="StopPodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\"" Jan 20 00:34:16.268264 containerd[1468]: time="2026-01-20T00:34:16.268124863Z" level=info msg="Ensure that sandbox 2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932 in task-service has been cleanup successfully" Jan 20 00:34:16.279425 containerd[1468]: time="2026-01-20T00:34:16.277677793Z" level=info msg="Ensure that sandbox f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e in task-service has been cleanup successfully" Jan 20 00:34:16.290859 kubelet[2558]: I0120 00:34:16.290831 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:34:16.292036 containerd[1468]: time="2026-01-20T00:34:16.292008233Z" level=info msg="StopPodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\"" Jan 20 00:34:16.299706 containerd[1468]: time="2026-01-20T00:34:16.293928105Z" level=info msg="Ensure that sandbox 79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7 in task-service has been cleanup successfully" Jan 20 00:34:16.301107 kubelet[2558]: I0120 00:34:16.301088 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:16.302107 containerd[1468]: time="2026-01-20T00:34:16.302080922Z" level=info msg="StopPodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\"" Jan 20 00:34:16.303211 containerd[1468]: time="2026-01-20T00:34:16.303051727Z" level=info msg="Ensure that sandbox 70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae in task-service has been cleanup successfully" Jan 20 00:34:16.476864 containerd[1468]: time="2026-01-20T00:34:16.476679531Z" level=error msg="StopPodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" failed" error="failed to destroy network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:16.480097 kubelet[2558]: E0120 00:34:16.477024 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:16.480097 
kubelet[2558]: E0120 00:34:16.477204 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae"} Jan 20 00:34:16.480097 kubelet[2558]: E0120 00:34:16.477260 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5393b5c7-b838-40b7-b5c7-c21832b5d797\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:16.480097 kubelet[2558]: E0120 00:34:16.477303 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5393b5c7-b838-40b7-b5c7-c21832b5d797\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g2srp" podUID="5393b5c7-b838-40b7-b5c7-c21832b5d797" Jan 20 00:34:16.484231 containerd[1468]: time="2026-01-20T00:34:16.484120029Z" level=error msg="StopPodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" failed" error="failed to destroy network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:16.485045 kubelet[2558]: E0120 00:34:16.484612 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:16.485045 kubelet[2558]: E0120 00:34:16.484708 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e"} Jan 20 00:34:16.485045 kubelet[2558]: E0120 00:34:16.484753 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96693105-0319-44f2-a458-134dbd8dc9b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:16.485045 kubelet[2558]: E0120 00:34:16.484783 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96693105-0319-44f2-a458-134dbd8dc9b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:16.504350 containerd[1468]: time="2026-01-20T00:34:16.502671927Z" level=error msg="StopPodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" failed" error="failed to destroy network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:16.505788 kubelet[2558]: E0120 00:34:16.505425 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:34:16.505788 kubelet[2558]: E0120 00:34:16.505655 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7"} Jan 20 00:34:16.505936 kubelet[2558]: E0120 00:34:16.505781 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b39ba6d-0875-4d35-90a8-c9d91492b367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:16.505936 kubelet[2558]: E0120 00:34:16.505831 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b39ba6d-0875-4d35-90a8-c9d91492b367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7zhgb" podUID="7b39ba6d-0875-4d35-90a8-c9d91492b367" Jan 20 00:34:16.506232 containerd[1468]: time="2026-01-20T00:34:16.506011939Z" level=error msg="StopPodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" failed" error="failed to destroy network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:16.506584 kubelet[2558]: E0120 00:34:16.506306 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:16.506655 kubelet[2558]: E0120 00:34:16.506573 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932"} Jan 20 00:34:16.506655 kubelet[2558]: E0120 00:34:16.506624 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:16.506843 kubelet[2558]: E0120 00:34:16.506657 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:24.706093 containerd[1468]: time="2026-01-20T00:34:24.704399153Z" level=info msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" Jan 20 00:34:24.798571 containerd[1468]: time="2026-01-20T00:34:24.798231110Z" level=error msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" failed" error="failed to destroy network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:24.798885 kubelet[2558]: E0120 00:34:24.798768 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:24.799377 kubelet[2558]: E0120 00:34:24.798901 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c"} Jan 20 00:34:24.799377 kubelet[2558]: E0120 00:34:24.798957 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:24.799377 kubelet[2558]: E0120 00:34:24.798990 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:26.141132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203143546.mount: Deactivated successfully. Jan 20 00:34:26.221620 containerd[1468]: time="2026-01-20T00:34:26.191001419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 00:34:26.221620 containerd[1468]: time="2026-01-20T00:34:26.219906806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:26.232275 containerd[1468]: time="2026-01-20T00:34:26.232125895Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:26.237116 containerd[1468]: time="2026-01-20T00:34:26.236848114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:26.237805 containerd[1468]: time="2026-01-20T00:34:26.237709576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.98860853s" Jan 20 00:34:26.237805 containerd[1468]: time="2026-01-20T00:34:26.237785607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 00:34:26.262538 containerd[1468]: time="2026-01-20T00:34:26.262344568Z" level=info msg="CreateContainer within sandbox \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:34:26.294183 containerd[1468]: time="2026-01-20T00:34:26.294079876Z" level=info msg="CreateContainer within sandbox \"c3d25afb1ebf292bf51cd101e9877f6a9a414672c433aa078b5520079761300c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2180035cbd8911258455d8131c86afea2ef7c248d513074d9c0d987ba5264746\"" Jan 20 00:34:26.295236 containerd[1468]: time="2026-01-20T00:34:26.295109138Z" level=info msg="StartContainer for \"2180035cbd8911258455d8131c86afea2ef7c248d513074d9c0d987ba5264746\"" Jan 20 00:34:26.399744 systemd[1]: Started 
cri-containerd-2180035cbd8911258455d8131c86afea2ef7c248d513074d9c0d987ba5264746.scope - libcontainer container 2180035cbd8911258455d8131c86afea2ef7c248d513074d9c0d987ba5264746. Jan 20 00:34:26.575043 containerd[1468]: time="2026-01-20T00:34:26.568069876Z" level=info msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" Jan 20 00:34:26.602972 containerd[1468]: time="2026-01-20T00:34:26.602845575Z" level=info msg="StartContainer for \"2180035cbd8911258455d8131c86afea2ef7c248d513074d9c0d987ba5264746\" returns successfully" Jan 20 00:34:26.635226 containerd[1468]: time="2026-01-20T00:34:26.635051976Z" level=error msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" failed" error="failed to destroy network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:34:26.635573 kubelet[2558]: E0120 00:34:26.635362 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:26.635573 kubelet[2558]: E0120 00:34:26.635412 2558 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e"} Jan 20 00:34:26.635573 kubelet[2558]: E0120 00:34:26.635543 2558 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fd51efe-cc95-4265-995a-08b13dbea3b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:34:26.635573 kubelet[2558]: E0120 00:34:26.635566 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fd51efe-cc95-4265-995a-08b13dbea3b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:26.731712 kubelet[2558]: E0120 00:34:26.729802 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:26.801150 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 00:34:26.801326 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
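The kubelet warning just above ("Nameserver limits exceeded") fires because the host resolv.conf lists more nameservers than the classic resolver limit of three, so kubelet keeps only the first three entries when building a pod's DNS configuration; here the applied line is 1.1.1.1 1.0.0.1 8.8.8.8. The snippet below is a rough sketch of that trimming, assuming a readable /etc/resolv.conf; it is not kubelet's actual implementation.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the resolver limit of three servers that kubelet
    // enforces when it assembles a pod's resolv.conf.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, keeping first %d of %d\n", maxNameservers, len(servers))
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }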
Jan 20 00:34:26.980066 kubelet[2558]: I0120 00:34:26.979897 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jkl2d" podStartSLOduration=1.335437856 podStartE2EDuration="21.979836984s" podCreationTimestamp="2026-01-20 00:34:05 +0000 UTC" firstStartedPulling="2026-01-20 00:34:05.596074269 +0000 UTC m=+34.411456093" lastFinishedPulling="2026-01-20 00:34:26.240473397 +0000 UTC m=+55.055855221" observedRunningTime="2026-01-20 00:34:26.761319777 +0000 UTC m=+55.576701611" watchObservedRunningTime="2026-01-20 00:34:26.979836984 +0000 UTC m=+55.795218809" Jan 20 00:34:26.983321 containerd[1468]: time="2026-01-20T00:34:26.982086618Z" level=info msg="StopPodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\"" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.128 [INFO][3958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.132 [INFO][3958] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" iface="eth0" netns="/var/run/netns/cni-7cfe84ef-d420-2431-31c9-5c9698787fc9" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.134 [INFO][3958] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" iface="eth0" netns="/var/run/netns/cni-7cfe84ef-d420-2431-31c9-5c9698787fc9" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.135 [INFO][3958] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" iface="eth0" netns="/var/run/netns/cni-7cfe84ef-d420-2431-31c9-5c9698787fc9" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.135 [INFO][3958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.135 [INFO][3958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.324 [INFO][3974] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.325 [INFO][3974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.326 [INFO][3974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.336 [WARNING][3974] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.336 [INFO][3974] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.343 [INFO][3974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:27.352808 containerd[1468]: 2026-01-20 00:34:27.349 [INFO][3958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:27.358805 containerd[1468]: time="2026-01-20T00:34:27.355950655Z" level=info msg="TearDown network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" successfully" Jan 20 00:34:27.358805 containerd[1468]: time="2026-01-20T00:34:27.355991110Z" level=info msg="StopPodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" returns successfully" Jan 20 00:34:27.357677 systemd[1]: run-netns-cni\x2d7cfe84ef\x2dd420\x2d2431\x2d31c9\x2d5c9698787fc9.mount: Deactivated successfully. Jan 20 00:34:27.432093 kubelet[2558]: I0120 00:34:27.431976 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgtwq\" (UniqueName: \"kubernetes.io/projected/3b5af3c4-8bd3-4827-8284-26abb85feced-kube-api-access-fgtwq\") pod \"3b5af3c4-8bd3-4827-8284-26abb85feced\" (UID: \"3b5af3c4-8bd3-4827-8284-26abb85feced\") " Jan 20 00:34:27.432093 kubelet[2558]: I0120 00:34:27.432072 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-ca-bundle\") pod \"3b5af3c4-8bd3-4827-8284-26abb85feced\" (UID: \"3b5af3c4-8bd3-4827-8284-26abb85feced\") " Jan 20 00:34:27.432362 kubelet[2558]: I0120 00:34:27.432113 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-backend-key-pair\") pod \"3b5af3c4-8bd3-4827-8284-26abb85feced\" (UID: \"3b5af3c4-8bd3-4827-8284-26abb85feced\") " Jan 20 00:34:27.432885 kubelet[2558]: I0120 00:34:27.432834 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3b5af3c4-8bd3-4827-8284-26abb85feced" (UID: "3b5af3c4-8bd3-4827-8284-26abb85feced"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:34:27.437644 kubelet[2558]: I0120 00:34:27.437619 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5af3c4-8bd3-4827-8284-26abb85feced-kube-api-access-fgtwq" (OuterVolumeSpecName: "kube-api-access-fgtwq") pod "3b5af3c4-8bd3-4827-8284-26abb85feced" (UID: "3b5af3c4-8bd3-4827-8284-26abb85feced"). InnerVolumeSpecName "kube-api-access-fgtwq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:34:27.437831 kubelet[2558]: I0120 00:34:27.437777 2558 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3b5af3c4-8bd3-4827-8284-26abb85feced" (UID: "3b5af3c4-8bd3-4827-8284-26abb85feced"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:34:27.439387 systemd[1]: var-lib-kubelet-pods-3b5af3c4\x2d8bd3\x2d4827\x2d8284\x2d26abb85feced-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfgtwq.mount: Deactivated successfully. Jan 20 00:34:27.439661 systemd[1]: var-lib-kubelet-pods-3b5af3c4\x2d8bd3\x2d4827\x2d8284\x2d26abb85feced-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 00:34:27.533346 kubelet[2558]: I0120 00:34:27.533205 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 00:34:27.533346 kubelet[2558]: I0120 00:34:27.533288 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b5af3c4-8bd3-4827-8284-26abb85feced-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 00:34:27.533346 kubelet[2558]: I0120 00:34:27.533306 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fgtwq\" (UniqueName: \"kubernetes.io/projected/3b5af3c4-8bd3-4827-8284-26abb85feced-kube-api-access-fgtwq\") on node \"localhost\" DevicePath \"\"" Jan 20 00:34:27.571477 containerd[1468]: time="2026-01-20T00:34:27.571373874Z" level=info msg="StopPodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\"" Jan 20 00:34:27.581991 systemd[1]: Removed slice kubepods-besteffort-pod3b5af3c4_8bd3_4827_8284_26abb85feced.slice - libcontainer container kubepods-besteffort-pod3b5af3c4_8bd3_4827_8284_26abb85feced.slice. Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.636 [INFO][4006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.636 [INFO][4006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" iface="eth0" netns="/var/run/netns/cni-11225b98-a5e9-578a-3d2c-e02781d4d7cc" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.637 [INFO][4006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" iface="eth0" netns="/var/run/netns/cni-11225b98-a5e9-578a-3d2c-e02781d4d7cc" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.638 [INFO][4006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" iface="eth0" netns="/var/run/netns/cni-11225b98-a5e9-578a-3d2c-e02781d4d7cc" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.638 [INFO][4006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.638 [INFO][4006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.679 [INFO][4015] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.679 [INFO][4015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.679 [INFO][4015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.689 [WARNING][4015] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.689 [INFO][4015] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.692 [INFO][4015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:27.699640 containerd[1468]: 2026-01-20 00:34:27.695 [INFO][4006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:27.700300 containerd[1468]: time="2026-01-20T00:34:27.700145577Z" level=info msg="TearDown network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" successfully" Jan 20 00:34:27.700300 containerd[1468]: time="2026-01-20T00:34:27.700189169Z" level=info msg="StopPodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" returns successfully" Jan 20 00:34:27.704047 containerd[1468]: time="2026-01-20T00:34:27.702237492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-t9x7r,Uid:0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:34:27.705013 systemd[1]: run-netns-cni\x2d11225b98\x2da5e9\x2d578a\x2d3d2c\x2de02781d4d7cc.mount: Deactivated successfully. 
Jan 20 00:34:27.733355 kubelet[2558]: E0120 00:34:27.731879 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:27.879220 systemd[1]: Created slice kubepods-besteffort-pod1e558f7e_555f_414d_86be_1ebe08b27e55.slice - libcontainer container kubepods-besteffort-pod1e558f7e_555f_414d_86be_1ebe08b27e55.slice. Jan 20 00:34:27.937054 kubelet[2558]: I0120 00:34:27.936715 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vzk\" (UniqueName: \"kubernetes.io/projected/1e558f7e-555f-414d-86be-1ebe08b27e55-kube-api-access-64vzk\") pod \"whisker-77dcbc58d8-fnbv4\" (UID: \"1e558f7e-555f-414d-86be-1ebe08b27e55\") " pod="calico-system/whisker-77dcbc58d8-fnbv4" Jan 20 00:34:27.937054 kubelet[2558]: I0120 00:34:27.936792 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e558f7e-555f-414d-86be-1ebe08b27e55-whisker-backend-key-pair\") pod \"whisker-77dcbc58d8-fnbv4\" (UID: \"1e558f7e-555f-414d-86be-1ebe08b27e55\") " pod="calico-system/whisker-77dcbc58d8-fnbv4" Jan 20 00:34:27.937054 kubelet[2558]: I0120 00:34:27.936844 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e558f7e-555f-414d-86be-1ebe08b27e55-whisker-ca-bundle\") pod \"whisker-77dcbc58d8-fnbv4\" (UID: \"1e558f7e-555f-414d-86be-1ebe08b27e55\") " pod="calico-system/whisker-77dcbc58d8-fnbv4" Jan 20 00:34:28.048825 systemd-networkd[1395]: calib6e0022eead: Link UP Jan 20 00:34:28.049127 systemd-networkd[1395]: calib6e0022eead: Gained carrier Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.821 [INFO][4029] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.846 [INFO][4029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0 calico-apiserver-78c5dffbd- calico-apiserver 0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84 1002 0 2026-01-20 00:33:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78c5dffbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78c5dffbd-t9x7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6e0022eead [] [] }} ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.847 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.949 [INFO][4057] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" HandleID="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.949 [INFO][4057] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" HandleID="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba0c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78c5dffbd-t9x7r", "timestamp":"2026-01-20 00:34:27.949212888 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.949 [INFO][4057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.949 [INFO][4057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.949 [INFO][4057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.973 [INFO][4057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.988 [INFO][4057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:27.997 [INFO][4057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.005 [INFO][4057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.009 [INFO][4057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.009 [INFO][4057] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.012 [INFO][4057] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388 Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.019 [INFO][4057] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.028 [INFO][4057] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.028 [INFO][4057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" host="localhost" Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.028 [INFO][4057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:28.073818 containerd[1468]: 2026-01-20 00:34:28.028 [INFO][4057] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" HandleID="k8s-pod-network.9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.074790 containerd[1468]: 2026-01-20 00:34:28.033 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78c5dffbd-t9x7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6e0022eead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:28.074790 containerd[1468]: 2026-01-20 00:34:28.033 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.074790 containerd[1468]: 2026-01-20 00:34:28.033 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6e0022eead ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.074790 containerd[1468]: 2026-01-20 00:34:28.049 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.074790 containerd[1468]: 2026-01-20 00:34:28.050 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388", Pod:"calico-apiserver-78c5dffbd-t9x7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6e0022eead", MAC:"6e:da:89:a3:46:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:28.074790 containerd[1468]: 2026-01-20 00:34:28.069 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-t9x7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:28.118829 containerd[1468]: time="2026-01-20T00:34:28.118315515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:28.118829 containerd[1468]: time="2026-01-20T00:34:28.118390004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:28.118829 containerd[1468]: time="2026-01-20T00:34:28.118537799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:28.122600 containerd[1468]: time="2026-01-20T00:34:28.119948185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:28.172058 systemd[1]: Started cri-containerd-9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388.scope - libcontainer container 9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388. 
Jan 20 00:34:28.187551 containerd[1468]: time="2026-01-20T00:34:28.187017274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77dcbc58d8-fnbv4,Uid:1e558f7e-555f-414d-86be-1ebe08b27e55,Namespace:calico-system,Attempt:0,}" Jan 20 00:34:28.191920 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:28.242200 containerd[1468]: time="2026-01-20T00:34:28.242069232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-t9x7r,Uid:0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388\"" Jan 20 00:34:28.244963 containerd[1468]: time="2026-01-20T00:34:28.244911099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:34:28.327756 containerd[1468]: time="2026-01-20T00:34:28.325937766Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:28.345767 containerd[1468]: time="2026-01-20T00:34:28.333941359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:34:28.345943 containerd[1468]: time="2026-01-20T00:34:28.334386198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:34:28.346265 kubelet[2558]: E0120 00:34:28.346185 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:28.346335 kubelet[2558]: E0120 00:34:28.346295 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:28.349956 kubelet[2558]: E0120 00:34:28.349563 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pffxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-t9x7r_calico-apiserver(0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:28.352387 kubelet[2558]: E0120 00:34:28.352055 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:28.445281 systemd-networkd[1395]: cali5da9185c3d7: Link UP Jan 20 00:34:28.447576 systemd-networkd[1395]: cali5da9185c3d7: Gained carrier Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.249 [INFO][4112] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.293 [INFO][4112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0 whisker-77dcbc58d8- calico-system 1e558f7e-555f-414d-86be-1ebe08b27e55 1017 0 2026-01-20 00:34:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77dcbc58d8 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77dcbc58d8-fnbv4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5da9185c3d7 [] [] }} ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.293 [INFO][4112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.358 [INFO][4132] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" HandleID="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Workload="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.358 [INFO][4132] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" HandleID="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Workload="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118df0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77dcbc58d8-fnbv4", "timestamp":"2026-01-20 00:34:28.35818795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.358 [INFO][4132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.358 [INFO][4132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.358 [INFO][4132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.375 [INFO][4132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.383 [INFO][4132] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.393 [INFO][4132] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.396 [INFO][4132] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.400 [INFO][4132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.400 [INFO][4132] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.405 [INFO][4132] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99 Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.413 [INFO][4132] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.425 [INFO][4132] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.425 [INFO][4132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" host="localhost" Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.425 [INFO][4132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:34:28.502321 containerd[1468]: 2026-01-20 00:34:28.425 [INFO][4132] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" HandleID="k8s-pod-network.e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Workload="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.503366 containerd[1468]: 2026-01-20 00:34:28.431 [INFO][4112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0", GenerateName:"whisker-77dcbc58d8-", Namespace:"calico-system", SelfLink:"", UID:"1e558f7e-555f-414d-86be-1ebe08b27e55", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77dcbc58d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77dcbc58d8-fnbv4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5da9185c3d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:28.503366 containerd[1468]: 2026-01-20 00:34:28.431 [INFO][4112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.503366 containerd[1468]: 2026-01-20 00:34:28.431 [INFO][4112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5da9185c3d7 ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.503366 containerd[1468]: 2026-01-20 00:34:28.449 [INFO][4112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.503366 containerd[1468]: 2026-01-20 00:34:28.453 [INFO][4112] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0", GenerateName:"whisker-77dcbc58d8-", Namespace:"calico-system", SelfLink:"", UID:"1e558f7e-555f-414d-86be-1ebe08b27e55", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77dcbc58d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99", Pod:"whisker-77dcbc58d8-fnbv4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5da9185c3d7", MAC:"fa:b8:92:43:96:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:28.503366 containerd[1468]: 2026-01-20 00:34:28.490 [INFO][4112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99" Namespace="calico-system" Pod="whisker-77dcbc58d8-fnbv4" WorkloadEndpoint="localhost-k8s-whisker--77dcbc58d8--fnbv4-eth0" Jan 20 00:34:28.601477 containerd[1468]: time="2026-01-20T00:34:28.596794645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:28.601477 containerd[1468]: time="2026-01-20T00:34:28.597189861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:28.601477 containerd[1468]: time="2026-01-20T00:34:28.597752399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:28.601477 containerd[1468]: time="2026-01-20T00:34:28.598745076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:28.659239 systemd[1]: Started cri-containerd-e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99.scope - libcontainer container e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99. 
Jan 20 00:34:28.701028 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:28.745973 kubelet[2558]: E0120 00:34:28.745621 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:28.884269 containerd[1468]: time="2026-01-20T00:34:28.883357461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77dcbc58d8-fnbv4,Uid:1e558f7e-555f-414d-86be-1ebe08b27e55,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0fda7db9bb83bbd9d5c7ba32c4b810d974cb8c001310211eea19f07f609dd99\"" Jan 20 00:34:28.890309 containerd[1468]: time="2026-01-20T00:34:28.890194793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:34:28.972240 containerd[1468]: time="2026-01-20T00:34:28.972014612Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:28.975882 containerd[1468]: time="2026-01-20T00:34:28.975745741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:34:28.975882 containerd[1468]: time="2026-01-20T00:34:28.975799876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:34:28.976542 kubelet[2558]: E0120 00:34:28.976130 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:34:28.976542 kubelet[2558]: E0120 00:34:28.976190 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:34:28.977064 kubelet[2558]: E0120 00:34:28.976948 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aa4fe703c3ce4a2fad7a06c0824f3068,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:28.981064 containerd[1468]: time="2026-01-20T00:34:28.980976724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:34:29.037660 kernel: bpftool[4300]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 00:34:29.046411 containerd[1468]: time="2026-01-20T00:34:29.045225094Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:29.047939 containerd[1468]: time="2026-01-20T00:34:29.047860051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:34:29.048019 containerd[1468]: time="2026-01-20T00:34:29.047958385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:34:29.049262 kubelet[2558]: E0120 00:34:29.048401 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:34:29.049262 kubelet[2558]: E0120 00:34:29.048599 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:34:29.049262 kubelet[2558]: E0120 00:34:29.048755 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:29.051060 kubelet[2558]: E0120 00:34:29.050951 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:34:29.410007 systemd-networkd[1395]: vxlan.calico: Link UP Jan 20 00:34:29.410021 systemd-networkd[1395]: vxlan.calico: Gained carrier Jan 20 00:34:29.435239 
systemd-networkd[1395]: calib6e0022eead: Gained IPv6LL Jan 20 00:34:29.578263 containerd[1468]: time="2026-01-20T00:34:29.577847363Z" level=info msg="StopPodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\"" Jan 20 00:34:29.582071 containerd[1468]: time="2026-01-20T00:34:29.578602709Z" level=info msg="StopPodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\"" Jan 20 00:34:29.582071 containerd[1468]: time="2026-01-20T00:34:29.578989989Z" level=info msg="StopPodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\"" Jan 20 00:34:29.588787 kubelet[2558]: I0120 00:34:29.586731 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5af3c4-8bd3-4827-8284-26abb85feced" path="/var/lib/kubelet/pods/3b5af3c4-8bd3-4827-8284-26abb85feced/volumes" Jan 20 00:34:29.691285 systemd-networkd[1395]: cali5da9185c3d7: Gained IPv6LL Jan 20 00:34:29.781997 kubelet[2558]: E0120 00:34:29.781378 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:29.783122 kubelet[2558]: E0120 00:34:29.782966 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.753 [INFO][4404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.753 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" iface="eth0" netns="/var/run/netns/cni-97b98fbe-ce6e-b0bf-b191-ff0b972168d2" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.754 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" iface="eth0" netns="/var/run/netns/cni-97b98fbe-ce6e-b0bf-b191-ff0b972168d2" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.756 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" iface="eth0" netns="/var/run/netns/cni-97b98fbe-ce6e-b0bf-b191-ff0b972168d2" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.756 [INFO][4404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.756 [INFO][4404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.842 [INFO][4426] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.842 [INFO][4426] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.842 [INFO][4426] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.859 [WARNING][4426] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.859 [INFO][4426] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.885 [INFO][4426] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:29.902225 containerd[1468]: 2026-01-20 00:34:29.888 [INFO][4404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:29.907205 containerd[1468]: time="2026-01-20T00:34:29.906848969Z" level=info msg="TearDown network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" successfully" Jan 20 00:34:29.907205 containerd[1468]: time="2026-01-20T00:34:29.906977638Z" level=info msg="StopPodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" returns successfully" Jan 20 00:34:29.909592 kubelet[2558]: E0120 00:34:29.909368 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:29.911224 systemd[1]: run-netns-cni\x2d97b98fbe\x2dce6e\x2db0bf\x2db191\x2dff0b972168d2.mount: Deactivated successfully. 
Jan 20 00:34:29.912231 containerd[1468]: time="2026-01-20T00:34:29.912141324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2srp,Uid:5393b5c7-b838-40b7-b5c7-c21832b5d797,Namespace:kube-system,Attempt:1,}" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.715 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.719 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" iface="eth0" netns="/var/run/netns/cni-28af21c8-1d3e-ac6d-5ad0-cd7d4894463e" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.720 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" iface="eth0" netns="/var/run/netns/cni-28af21c8-1d3e-ac6d-5ad0-cd7d4894463e" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.720 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" iface="eth0" netns="/var/run/netns/cni-28af21c8-1d3e-ac6d-5ad0-cd7d4894463e" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.721 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.721 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.884 [INFO][4417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.884 [INFO][4417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.889 [INFO][4417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.905 [WARNING][4417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.906 [INFO][4417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.910 [INFO][4417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:29.928139 containerd[1468]: 2026-01-20 00:34:29.919 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:29.928651 containerd[1468]: time="2026-01-20T00:34:29.928572334Z" level=info msg="TearDown network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" successfully" Jan 20 00:34:29.928651 containerd[1468]: time="2026-01-20T00:34:29.928595968Z" level=info msg="StopPodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" returns successfully" Jan 20 00:34:29.933625 containerd[1468]: time="2026-01-20T00:34:29.930814434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkvnv,Uid:2d7f8729-92e8-466b-ac93-b93fcaadeb7a,Namespace:calico-system,Attempt:1,}" Jan 20 00:34:29.933383 systemd[1]: run-netns-cni\x2d28af21c8\x2d1d3e\x2dac6d\x2d5ad0\x2dcd7d4894463e.mount: Deactivated successfully. Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.747 [INFO][4393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.748 [INFO][4393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" iface="eth0" netns="/var/run/netns/cni-ce61c2b1-9339-e298-8ee9-a94f770fb364" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.748 [INFO][4393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" iface="eth0" netns="/var/run/netns/cni-ce61c2b1-9339-e298-8ee9-a94f770fb364" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.749 [INFO][4393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" iface="eth0" netns="/var/run/netns/cni-ce61c2b1-9339-e298-8ee9-a94f770fb364" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.749 [INFO][4393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.749 [INFO][4393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.903 [INFO][4424] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.904 [INFO][4424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.909 [INFO][4424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.932 [WARNING][4424] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.933 [INFO][4424] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.937 [INFO][4424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:29.953866 containerd[1468]: 2026-01-20 00:34:29.946 [INFO][4393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:29.953866 containerd[1468]: time="2026-01-20T00:34:29.950732128Z" level=info msg="TearDown network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" successfully" Jan 20 00:34:29.953866 containerd[1468]: time="2026-01-20T00:34:29.950845159Z" level=info msg="StopPodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" returns successfully" Jan 20 00:34:29.953866 containerd[1468]: time="2026-01-20T00:34:29.952643486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-68fs6,Uid:96693105-0319-44f2-a458-134dbd8dc9b8,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:34:29.963990 systemd[1]: run-netns-cni\x2dce61c2b1\x2d9339\x2de298\x2d8ee9\x2da94f770fb364.mount: Deactivated successfully. Jan 20 00:34:30.328595 systemd-networkd[1395]: cali9879999191c: Link UP Jan 20 00:34:30.331623 systemd-networkd[1395]: cali9879999191c: Gained carrier Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.087 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--g2srp-eth0 coredns-668d6bf9bc- kube-system 5393b5c7-b838-40b7-b5c7-c21832b5d797 1049 0 2026-01-20 00:33:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-g2srp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9879999191c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.088 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.150 [INFO][4498] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" HandleID="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.349210 
containerd[1468]: 2026-01-20 00:34:30.150 [INFO][4498] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" HandleID="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c75c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-g2srp", "timestamp":"2026-01-20 00:34:30.150345547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.151 [INFO][4498] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.151 [INFO][4498] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.151 [INFO][4498] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.182 [INFO][4498] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.191 [INFO][4498] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.210 [INFO][4498] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.217 [INFO][4498] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.226 [INFO][4498] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.226 [INFO][4498] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.233 [INFO][4498] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4 Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.249 [INFO][4498] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.288 [INFO][4498] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.288 [INFO][4498] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" host="localhost" Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.288 [INFO][4498] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:34:30.349210 containerd[1468]: 2026-01-20 00:34:30.288 [INFO][4498] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" HandleID="k8s-pod-network.ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.349982 containerd[1468]: 2026-01-20 00:34:30.300 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g2srp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5393b5c7-b838-40b7-b5c7-c21832b5d797", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-g2srp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9879999191c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:30.349982 containerd[1468]: 2026-01-20 00:34:30.300 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.349982 containerd[1468]: 2026-01-20 00:34:30.300 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9879999191c ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.349982 containerd[1468]: 2026-01-20 00:34:30.327 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.349982 
containerd[1468]: 2026-01-20 00:34:30.328 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g2srp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5393b5c7-b838-40b7-b5c7-c21832b5d797", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4", Pod:"coredns-668d6bf9bc-g2srp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9879999191c", MAC:"2a:ab:4b:e3:fe:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:30.349982 containerd[1468]: 2026-01-20 00:34:30.343 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g2srp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:30.476201 containerd[1468]: time="2026-01-20T00:34:30.453166771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:30.476201 containerd[1468]: time="2026-01-20T00:34:30.453260487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:30.476201 containerd[1468]: time="2026-01-20T00:34:30.453277468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:30.476201 containerd[1468]: time="2026-01-20T00:34:30.453396599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:30.528580 systemd-networkd[1395]: calif73aeeff24e: Link UP Jan 20 00:34:30.529303 systemd-networkd[1395]: calif73aeeff24e: Gained carrier Jan 20 00:34:30.587339 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Jan 20 00:34:30.594890 systemd[1]: run-containerd-runc-k8s.io-ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4-runc.oMUxPg.mount: Deactivated successfully. Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.180 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wkvnv-eth0 csi-node-driver- calico-system 2d7f8729-92e8-466b-ac93-b93fcaadeb7a 1047 0 2026-01-20 00:34:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wkvnv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif73aeeff24e [] [] }} ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.182 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.292 [INFO][4524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" HandleID="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.293 [INFO][4524] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" HandleID="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wkvnv", "timestamp":"2026-01-20 00:34:30.292994785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.293 [INFO][4524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.293 [INFO][4524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.293 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.312 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.332 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.346 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.352 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.374 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.374 [INFO][4524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.377 [INFO][4524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62 Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.390 [INFO][4524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.408 [INFO][4524] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.409 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" host="localhost" Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.411 [INFO][4524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:34:30.596549 containerd[1468]: 2026-01-20 00:34:30.411 [INFO][4524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" HandleID="k8s-pod-network.3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.597363 containerd[1468]: 2026-01-20 00:34:30.442 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wkvnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d7f8729-92e8-466b-ac93-b93fcaadeb7a", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wkvnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif73aeeff24e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:30.597363 containerd[1468]: 2026-01-20 00:34:30.443 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.597363 containerd[1468]: 2026-01-20 00:34:30.443 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif73aeeff24e ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.597363 containerd[1468]: 2026-01-20 00:34:30.531 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.597363 containerd[1468]: 2026-01-20 00:34:30.532 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wkvnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d7f8729-92e8-466b-ac93-b93fcaadeb7a", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62", Pod:"csi-node-driver-wkvnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif73aeeff24e", MAC:"32:33:82:3c:98:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:30.597363 containerd[1468]: 2026-01-20 00:34:30.552 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62" Namespace="calico-system" Pod="csi-node-driver-wkvnv" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:30.609018 systemd[1]: Started cri-containerd-ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4.scope - libcontainer container ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4. Jan 20 00:34:30.632821 systemd-networkd[1395]: cali14acf4871a4: Link UP Jan 20 00:34:30.634727 systemd-networkd[1395]: cali14acf4871a4: Gained carrier Jan 20 00:34:30.686596 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:30.698918 containerd[1468]: time="2026-01-20T00:34:30.696742661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:30.698918 containerd[1468]: time="2026-01-20T00:34:30.696811290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:30.698918 containerd[1468]: time="2026-01-20T00:34:30.696826368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:30.698918 containerd[1468]: time="2026-01-20T00:34:30.696928527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.215 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0 calico-apiserver-78c5dffbd- calico-apiserver 96693105-0319-44f2-a458-134dbd8dc9b8 1048 0 2026-01-20 00:33:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78c5dffbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78c5dffbd-68fs6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14acf4871a4 [] [] }} ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.215 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.322 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" HandleID="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.322 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" HandleID="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004600d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78c5dffbd-68fs6", "timestamp":"2026-01-20 00:34:30.322602779 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.322 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.437 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.437 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.494 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.521 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.537 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.544 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.553 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.553 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.583 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76 Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.595 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.608 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.608 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" host="localhost" Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.608 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
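The entries above show Calico's IPAM plugin serializing allocation behind a host-wide lock, confirming this host's affinity to the block 192.168.88.128/26, and claiming 192.168.88.133 from it. As a rough illustration only (standard-library Go, not Calico's implementation; the set of already-used addresses is assumed from the surrounding entries), a toy next-free-address picker over such a block looks like this:

    // ipam_block_sketch.go -- illustrative only: a toy "next free address from a
    // /26 block" picker, NOT Calico's IPAM code. Calico additionally persists block
    // state in the datastore and guards it with the host-wide lock seen above.
    package main

    import (
    	"fmt"
    	"net"
    )

    // nextFree returns the first address in cidr that is not already allocated.
    func nextFree(cidr string, allocated map[string]bool) (net.IP, error) {
    	ip, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = increment(cur) {
    		if !allocated[cur.String()] {
    			return cur, nil
    		}
    	}
    	return nil, fmt.Errorf("block %s exhausted", cidr)
    }

    // increment returns ip+1.
    func increment(ip net.IP) net.IP {
    	next := make(net.IP, len(ip))
    	copy(next, ip)
    	for i := len(next) - 1; i >= 0; i-- {
    		next[i]++
    		if next[i] != 0 {
    			break
    		}
    	}
    	return next
    }

    func main() {
    	// Assumption from the surrounding log: .128-.132 are already in use on this
    	// host by the time of these entries.
    	used := map[string]bool{
    		"192.168.88.128": true, "192.168.88.129": true, "192.168.88.130": true,
    		"192.168.88.131": true, "192.168.88.132": true,
    	}
    	ip, err := nextFree("192.168.88.128/26", used)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip) // prints 192.168.88.133, matching the assignment above
    }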
Jan 20 00:34:30.700063 containerd[1468]: 2026-01-20 00:34:30.609 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" HandleID="k8s-pod-network.01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.701032 containerd[1468]: 2026-01-20 00:34:30.619 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"96693105-0319-44f2-a458-134dbd8dc9b8", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78c5dffbd-68fs6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14acf4871a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:30.701032 containerd[1468]: 2026-01-20 00:34:30.619 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.701032 containerd[1468]: 2026-01-20 00:34:30.619 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14acf4871a4 ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.701032 containerd[1468]: 2026-01-20 00:34:30.636 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.701032 containerd[1468]: 2026-01-20 00:34:30.639 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"96693105-0319-44f2-a458-134dbd8dc9b8", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76", Pod:"calico-apiserver-78c5dffbd-68fs6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14acf4871a4", MAC:"6e:a8:48:79:e1:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:30.701032 containerd[1468]: 2026-01-20 00:34:30.694 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76" Namespace="calico-apiserver" Pod="calico-apiserver-78c5dffbd-68fs6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:30.762029 systemd[1]: Started cri-containerd-3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62.scope - libcontainer container 3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62. 
Jan 20 00:34:30.787130 kubelet[2558]: E0120 00:34:30.786732 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:34:30.815222 containerd[1468]: time="2026-01-20T00:34:30.815183250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g2srp,Uid:5393b5c7-b838-40b7-b5c7-c21832b5d797,Namespace:kube-system,Attempt:1,} returns sandbox id \"ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4\"" Jan 20 00:34:30.822895 containerd[1468]: time="2026-01-20T00:34:30.822130880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:30.822895 containerd[1468]: time="2026-01-20T00:34:30.822228281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:30.822895 containerd[1468]: time="2026-01-20T00:34:30.822246665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:30.822895 containerd[1468]: time="2026-01-20T00:34:30.822386747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:30.824011 kubelet[2558]: E0120 00:34:30.823349 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:30.829350 containerd[1468]: time="2026-01-20T00:34:30.829267130Z" level=info msg="CreateContainer within sandbox \"ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:34:30.864969 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:30.907635 systemd[1]: Started cri-containerd-01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76.scope - libcontainer container 01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76. 
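The RunPodSandbox, CreateContainer, and StartContainer entries around this point (and the "Started cri-containerd-…scope" units) are the kubelet driving containerd over CRI. For orientation, a minimal sketch of the same create-and-start lifecycle through containerd's public Go client follows; the socket path and the "k8s.io" namespace are containerd/kubelet defaults, while the image reference and container ID are placeholder assumptions:

    // lifecycle_sketch.go -- minimal sketch of the create/start lifecycle the
    // kubelet drives via CRI, expressed with containerd's Go client. Image ref,
    // container ID, and socket path are illustrative assumptions.
    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/cio"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed containers live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Assumes the image is already present in this namespace.
    	image, err := client.GetImage(ctx, "docker.io/library/busybox:latest")
    	if err != nil {
    		log.Fatal(err)
    	}

    	container, err := client.NewContainer(ctx, "demo",
    		containerd.WithImage(image),
    		containerd.WithNewSnapshot("demo-snapshot", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    	// NewTask + Start corresponds roughly to the StartContainer step in the log.
    	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer task.Delete(ctx)

    	if err := task.Start(ctx); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("started container %s as pid %d", container.ID(), task.Pid())
    }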
Jan 20 00:34:30.910047 containerd[1468]: time="2026-01-20T00:34:30.909884535Z" level=info msg="CreateContainer within sandbox \"ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed5874edf771456de8acbcd9e5149848de95f767348c104999a6e54659826aeb\"" Jan 20 00:34:30.914723 containerd[1468]: time="2026-01-20T00:34:30.913013211Z" level=info msg="StartContainer for \"ed5874edf771456de8acbcd9e5149848de95f767348c104999a6e54659826aeb\"" Jan 20 00:34:30.953022 containerd[1468]: time="2026-01-20T00:34:30.952908801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkvnv,Uid:2d7f8729-92e8-466b-ac93-b93fcaadeb7a,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62\"" Jan 20 00:34:30.957880 containerd[1468]: time="2026-01-20T00:34:30.957732224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:34:30.966921 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:30.992798 systemd[1]: Started cri-containerd-ed5874edf771456de8acbcd9e5149848de95f767348c104999a6e54659826aeb.scope - libcontainer container ed5874edf771456de8acbcd9e5149848de95f767348c104999a6e54659826aeb. Jan 20 00:34:31.043023 containerd[1468]: time="2026-01-20T00:34:31.042863845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c5dffbd-68fs6,Uid:96693105-0319-44f2-a458-134dbd8dc9b8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76\"" Jan 20 00:34:31.044958 containerd[1468]: time="2026-01-20T00:34:31.044930194Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:31.049329 containerd[1468]: time="2026-01-20T00:34:31.049184788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:34:31.049329 containerd[1468]: time="2026-01-20T00:34:31.049247225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:34:31.051777 kubelet[2558]: E0120 00:34:31.049938 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:34:31.051777 kubelet[2558]: E0120 00:34:31.049985 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:34:31.051777 kubelet[2558]: E0120 00:34:31.050164 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:31.054364 containerd[1468]: time="2026-01-20T00:34:31.053411060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:34:31.101250 containerd[1468]: time="2026-01-20T00:34:31.101076524Z" level=info msg="StartContainer for \"ed5874edf771456de8acbcd9e5149848de95f767348c104999a6e54659826aeb\" returns successfully" Jan 20 00:34:31.145390 containerd[1468]: time="2026-01-20T00:34:31.143000297Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:31.146892 containerd[1468]: time="2026-01-20T00:34:31.146789495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:34:31.147036 containerd[1468]: time="2026-01-20T00:34:31.146897356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:34:31.147259 kubelet[2558]: E0120 00:34:31.147164 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:31.147259 kubelet[2558]: E0120 00:34:31.147232 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:31.147987 kubelet[2558]: E0120 00:34:31.147679 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75rcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-68fs6_calico-apiserver(96693105-0319-44f2-a458-134dbd8dc9b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:31.148189 containerd[1468]: time="2026-01-20T00:34:31.147950276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:34:31.148927 kubelet[2558]: E0120 00:34:31.148897 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:31.228819 containerd[1468]: time="2026-01-20T00:34:31.228280167Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:31.230883 containerd[1468]: time="2026-01-20T00:34:31.230705484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:34:31.231677 containerd[1468]: time="2026-01-20T00:34:31.231171632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:34:31.231814 kubelet[2558]: E0120 00:34:31.231684 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:34:31.232175 kubelet[2558]: E0120 00:34:31.231887 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:34:31.233004 kubelet[2558]: E0120 00:34:31.232844 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:31.235659 kubelet[2558]: E0120 00:34:31.235319 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:31.472390 containerd[1468]: time="2026-01-20T00:34:31.472040298Z" level=info msg="StopPodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\"" Jan 20 00:34:31.575204 containerd[1468]: time="2026-01-20T00:34:31.566183329Z" level=info msg="StopPodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\"" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.526 [WARNING][4763] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g2srp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5393b5c7-b838-40b7-b5c7-c21832b5d797", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4", Pod:"coredns-668d6bf9bc-g2srp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9879999191c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.526 [INFO][4763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.526 [INFO][4763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" iface="eth0" netns="" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.526 [INFO][4763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.526 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.565 [INFO][4771] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.565 [INFO][4771] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.565 [INFO][4771] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.592 [WARNING][4771] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.592 [INFO][4771] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.594 [INFO][4771] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:31.603118 containerd[1468]: 2026-01-20 00:34:31.599 [INFO][4763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.604117 containerd[1468]: time="2026-01-20T00:34:31.604091528Z" level=info msg="TearDown network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" successfully" Jan 20 00:34:31.604170 containerd[1468]: time="2026-01-20T00:34:31.604157271Z" level=info msg="StopPodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" returns successfully" Jan 20 00:34:31.614792 containerd[1468]: time="2026-01-20T00:34:31.614687428Z" level=info msg="RemovePodSandbox for \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\"" Jan 20 00:34:31.617923 containerd[1468]: time="2026-01-20T00:34:31.617831164Z" level=info msg="Forcibly stopping sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\"" Jan 20 00:34:31.797995 kubelet[2558]: E0120 00:34:31.797935 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:31.801994 kubelet[2558]: E0120 00:34:31.801896 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.703 [WARNING][4806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--g2srp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5393b5c7-b838-40b7-b5c7-c21832b5d797", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee1dffae194d88763b85d331ad5b6d899fe80a8af28ecf1a8a8115ee98f885d4", Pod:"coredns-668d6bf9bc-g2srp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9879999191c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" iface="eth0" netns="" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.758 [INFO][4819] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.759 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.759 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.780 [WARNING][4819] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.780 [INFO][4819] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" HandleID="k8s-pod-network.70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Workload="localhost-k8s-coredns--668d6bf9bc--g2srp-eth0" Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.789 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:31.806972 containerd[1468]: 2026-01-20 00:34:31.795 [INFO][4806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae" Jan 20 00:34:31.809146 containerd[1468]: time="2026-01-20T00:34:31.809110128Z" level=info msg="TearDown network for sandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" successfully" Jan 20 00:34:31.810714 kubelet[2558]: E0120 00:34:31.810457 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.700 [INFO][4790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.700 [INFO][4790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" iface="eth0" netns="/var/run/netns/cni-a3d4894b-6707-f5b9-e70c-e5bfd4b05b2c" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.704 [INFO][4790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" iface="eth0" netns="/var/run/netns/cni-a3d4894b-6707-f5b9-e70c-e5bfd4b05b2c" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" iface="eth0" netns="/var/run/netns/cni-a3d4894b-6707-f5b9-e70c-e5bfd4b05b2c" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.705 [INFO][4790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.764 [INFO][4818] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.765 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.788 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.811 [WARNING][4818] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.812 [INFO][4818] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.817 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:31.830863 containerd[1468]: 2026-01-20 00:34:31.823 [INFO][4790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:34:31.832761 containerd[1468]: time="2026-01-20T00:34:31.832616306Z" level=info msg="TearDown network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" successfully" Jan 20 00:34:31.832761 containerd[1468]: time="2026-01-20T00:34:31.832694001Z" level=info msg="StopPodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" returns successfully" Jan 20 00:34:31.834246 kubelet[2558]: E0120 00:34:31.834086 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:31.837338 systemd[1]: run-netns-cni\x2da3d4894b\x2d6707\x2df5b9\x2de70c\x2de5bfd4b05b2c.mount: Deactivated successfully. 
Jan 20 00:34:31.837858 containerd[1468]: time="2026-01-20T00:34:31.837308583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zhgb,Uid:7b39ba6d-0875-4d35-90a8-c9d91492b367,Namespace:kube-system,Attempt:1,}" Jan 20 00:34:31.907318 containerd[1468]: time="2026-01-20T00:34:31.907216703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:34:31.907599 containerd[1468]: time="2026-01-20T00:34:31.907326718Z" level=info msg="RemovePodSandbox \"70e3eddf6a71b852e7f2ce157f34e56bb086d364d198d2b977e1a1fff0a77cae\" returns successfully" Jan 20 00:34:31.908576 containerd[1468]: time="2026-01-20T00:34:31.908346444Z" level=info msg="StopPodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\"" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:31.989 [WARNING][4843] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wkvnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d7f8729-92e8-466b-ac93-b93fcaadeb7a", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62", Pod:"csi-node-driver-wkvnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif73aeeff24e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:31.990 [INFO][4843] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:31.990 [INFO][4843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" iface="eth0" netns="" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:31.990 [INFO][4843] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:31.990 [INFO][4843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.036 [INFO][4867] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.037 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.037 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.047 [WARNING][4867] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.047 [INFO][4867] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.051 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:32.062183 containerd[1468]: 2026-01-20 00:34:32.056 [INFO][4843] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.062183 containerd[1468]: time="2026-01-20T00:34:32.061882564Z" level=info msg="TearDown network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" successfully" Jan 20 00:34:32.062183 containerd[1468]: time="2026-01-20T00:34:32.061922648Z" level=info msg="StopPodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" returns successfully" Jan 20 00:34:32.065341 containerd[1468]: time="2026-01-20T00:34:32.064921144Z" level=info msg="RemovePodSandbox for \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\"" Jan 20 00:34:32.065341 containerd[1468]: time="2026-01-20T00:34:32.064960767Z" level=info msg="Forcibly stopping sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\"" Jan 20 00:34:32.126632 systemd-networkd[1395]: calif73aeeff24e: Gained IPv6LL Jan 20 00:34:32.193883 systemd-networkd[1395]: cali511405e0ca2: Link UP Jan 20 00:34:32.195255 systemd-networkd[1395]: cali511405e0ca2: Gained carrier Jan 20 00:34:32.226898 kubelet[2558]: I0120 00:34:32.226767 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g2srp" podStartSLOduration=58.226741734 podStartE2EDuration="58.226741734s" podCreationTimestamp="2026-01-20 00:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:34:31.901029857 +0000 UTC m=+60.716411681" watchObservedRunningTime="2026-01-20 00:34:32.226741734 +0000 UTC m=+61.042123568" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.015 [INFO][4850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0 coredns-668d6bf9bc- kube-system 7b39ba6d-0875-4d35-90a8-c9d91492b367 1093 0 2026-01-20 00:33:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7zhgb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali511405e0ca2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.016 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.089 [INFO][4876] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" HandleID="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.089 [INFO][4876] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" 
HandleID="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7zhgb", "timestamp":"2026-01-20 00:34:32.089045838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.089 [INFO][4876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.089 [INFO][4876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.089 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.101 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.113 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.125 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.128 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.134 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.134 [INFO][4876] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.142 [INFO][4876] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17 Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.150 [INFO][4876] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.183 [INFO][4876] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.183 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" host="localhost" Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.183 [INFO][4876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:34:32.232098 containerd[1468]: 2026-01-20 00:34:32.183 [INFO][4876] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" HandleID="k8s-pod-network.c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.233914 containerd[1468]: 2026-01-20 00:34:32.187 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b39ba6d-0875-4d35-90a8-c9d91492b367", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7zhgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali511405e0ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.233914 containerd[1468]: 2026-01-20 00:34:32.188 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.233914 containerd[1468]: 2026-01-20 00:34:32.188 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali511405e0ca2 ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.233914 containerd[1468]: 2026-01-20 00:34:32.195 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.233914 
containerd[1468]: 2026-01-20 00:34:32.198 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b39ba6d-0875-4d35-90a8-c9d91492b367", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17", Pod:"coredns-668d6bf9bc-7zhgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali511405e0ca2", MAC:"1a:18:81:23:c1:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.233914 containerd[1468]: 2026-01-20 00:34:32.228 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zhgb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.129 [WARNING][4893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wkvnv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d7f8729-92e8-466b-ac93-b93fcaadeb7a", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cd281c16688a8080e3716c55f983835b3a72ad574c34bb5f89f5cf264182a62", Pod:"csi-node-driver-wkvnv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif73aeeff24e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.130 [INFO][4893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.130 [INFO][4893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" iface="eth0" netns="" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.130 [INFO][4893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.130 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.205 [INFO][4902] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.205 [INFO][4902] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.205 [INFO][4902] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.227 [WARNING][4902] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.227 [INFO][4902] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" HandleID="k8s-pod-network.41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Workload="localhost-k8s-csi--node--driver--wkvnv-eth0" Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.232 [INFO][4902] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:32.241821 containerd[1468]: 2026-01-20 00:34:32.237 [INFO][4893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3" Jan 20 00:34:32.241821 containerd[1468]: time="2026-01-20T00:34:32.241617937Z" level=info msg="TearDown network for sandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" successfully" Jan 20 00:34:32.249378 containerd[1468]: time="2026-01-20T00:34:32.249187157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:34:32.249378 containerd[1468]: time="2026-01-20T00:34:32.249247689Z" level=info msg="RemovePodSandbox \"41b361974c332ba83312cb0f6879bf73d87a3716982d60552e9c3ed043afdcf3\" returns successfully" Jan 20 00:34:32.250653 containerd[1468]: time="2026-01-20T00:34:32.250627158Z" level=info msg="StopPodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\"" Jan 20 00:34:32.275930 containerd[1468]: time="2026-01-20T00:34:32.275706971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:32.276187 containerd[1468]: time="2026-01-20T00:34:32.276116984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:32.276341 containerd[1468]: time="2026-01-20T00:34:32.276294394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:32.278626 containerd[1468]: time="2026-01-20T00:34:32.278346905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:32.321976 systemd[1]: Started cri-containerd-c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17.scope - libcontainer container c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17. 
Jan 20 00:34:32.347193 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:32.383691 systemd-networkd[1395]: cali9879999191c: Gained IPv6LL Jan 20 00:34:32.409964 containerd[1468]: time="2026-01-20T00:34:32.409828586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zhgb,Uid:7b39ba6d-0875-4d35-90a8-c9d91492b367,Namespace:kube-system,Attempt:1,} returns sandbox id \"c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17\"" Jan 20 00:34:32.412241 kubelet[2558]: E0120 00:34:32.412009 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:32.416456 containerd[1468]: time="2026-01-20T00:34:32.416258382Z" level=info msg="CreateContainer within sandbox \"c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.322 [WARNING][4942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388", Pod:"calico-apiserver-78c5dffbd-t9x7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6e0022eead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.323 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.323 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" iface="eth0" netns="" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.323 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.323 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.396 [INFO][4971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.396 [INFO][4971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.396 [INFO][4971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.410 [WARNING][4971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.410 [INFO][4971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.414 [INFO][4971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:32.424585 containerd[1468]: 2026-01-20 00:34:32.420 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.425561 containerd[1468]: time="2026-01-20T00:34:32.424762810Z" level=info msg="TearDown network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" successfully" Jan 20 00:34:32.425561 containerd[1468]: time="2026-01-20T00:34:32.424792525Z" level=info msg="StopPodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" returns successfully" Jan 20 00:34:32.426595 containerd[1468]: time="2026-01-20T00:34:32.425843643Z" level=info msg="RemovePodSandbox for \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\"" Jan 20 00:34:32.426595 containerd[1468]: time="2026-01-20T00:34:32.425879149Z" level=info msg="Forcibly stopping sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\"" Jan 20 00:34:32.444703 containerd[1468]: time="2026-01-20T00:34:32.444592420Z" level=info msg="CreateContainer within sandbox \"c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2d59ec8f5c649db2a04a5941bdbed0aa955f56c8bf138c304a2e3a361a65a7e\"" Jan 20 00:34:32.449845 containerd[1468]: time="2026-01-20T00:34:32.449730397Z" level=info msg="StartContainer for \"a2d59ec8f5c649db2a04a5941bdbed0aa955f56c8bf138c304a2e3a361a65a7e\"" Jan 20 00:34:32.529569 systemd[1]: Started cri-containerd-a2d59ec8f5c649db2a04a5941bdbed0aa955f56c8bf138c304a2e3a361a65a7e.scope - libcontainer container a2d59ec8f5c649db2a04a5941bdbed0aa955f56c8bf138c304a2e3a361a65a7e. Jan 20 00:34:32.617280 containerd[1468]: time="2026-01-20T00:34:32.615991298Z" level=info msg="StartContainer for \"a2d59ec8f5c649db2a04a5941bdbed0aa955f56c8bf138c304a2e3a361a65a7e\" returns successfully" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.540 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b9eef891bfe1e55233fbb629f3bb423e52e87e07e5e001957fc2e0e49b39388", Pod:"calico-apiserver-78c5dffbd-t9x7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6e0022eead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.541 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.541 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" iface="eth0" netns="" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.541 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.541 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.596 [INFO][5032] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.596 [INFO][5032] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.596 [INFO][5032] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.610 [WARNING][5032] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.612 [INFO][5032] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" HandleID="k8s-pod-network.2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Workload="localhost-k8s-calico--apiserver--78c5dffbd--t9x7r-eth0" Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.619 [INFO][5032] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:32.629298 containerd[1468]: 2026-01-20 00:34:32.623 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932" Jan 20 00:34:32.630916 containerd[1468]: time="2026-01-20T00:34:32.630046939Z" level=info msg="TearDown network for sandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" successfully" Jan 20 00:34:32.635988 systemd-networkd[1395]: cali14acf4871a4: Gained IPv6LL Jan 20 00:34:32.642227 containerd[1468]: time="2026-01-20T00:34:32.642014440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:34:32.642227 containerd[1468]: time="2026-01-20T00:34:32.642088808Z" level=info msg="RemovePodSandbox \"2737a38dd53c89f0d7546659cd85f144eb90b1c4c5cb60703b092edf08ad6932\" returns successfully" Jan 20 00:34:32.643577 containerd[1468]: time="2026-01-20T00:34:32.643207516Z" level=info msg="StopPodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\"" Jan 20 00:34:32.818329 kubelet[2558]: E0120 00:34:32.818237 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:32.831571 kubelet[2558]: E0120 00:34:32.829936 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:32.834584 kubelet[2558]: E0120 00:34:32.834554 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:32.839282 containerd[1468]: 
2026-01-20 00:34:32.736 [WARNING][5063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"96693105-0319-44f2-a458-134dbd8dc9b8", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76", Pod:"calico-apiserver-78c5dffbd-68fs6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14acf4871a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.736 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.736 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" iface="eth0" netns="" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.736 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.736 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.807 [INFO][5073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.808 [INFO][5073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.808 [INFO][5073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.822 [WARNING][5073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.822 [INFO][5073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.826 [INFO][5073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:32.839282 containerd[1468]: 2026-01-20 00:34:32.832 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:32.839282 containerd[1468]: time="2026-01-20T00:34:32.839078193Z" level=info msg="TearDown network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" successfully" Jan 20 00:34:32.839282 containerd[1468]: time="2026-01-20T00:34:32.839109541Z" level=info msg="StopPodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" returns successfully" Jan 20 00:34:32.840139 containerd[1468]: time="2026-01-20T00:34:32.840011934Z" level=info msg="RemovePodSandbox for \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\"" Jan 20 00:34:32.840139 containerd[1468]: time="2026-01-20T00:34:32.840057359Z" level=info msg="Forcibly stopping sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\"" Jan 20 00:34:32.840651 kubelet[2558]: E0120 00:34:32.840304 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:32.913080 kubelet[2558]: I0120 00:34:32.912928 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7zhgb" podStartSLOduration=58.912905631 podStartE2EDuration="58.912905631s" podCreationTimestamp="2026-01-20 00:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:34:32.849356199 +0000 UTC m=+61.664738024" watchObservedRunningTime="2026-01-20 00:34:32.912905631 +0000 UTC m=+61.728287454" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:32.994 [WARNING][5091] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0", GenerateName:"calico-apiserver-78c5dffbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"96693105-0319-44f2-a458-134dbd8dc9b8", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c5dffbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d4097555192d4ea9c1f9dc30d9e8a23939881928974f814dde82980d5bfd76", Pod:"calico-apiserver-78c5dffbd-68fs6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14acf4871a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:32.994 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:32.994 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" iface="eth0" netns="" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:32.994 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:32.994 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.039 [INFO][5102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.040 [INFO][5102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.040 [INFO][5102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.048 [WARNING][5102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.048 [INFO][5102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" HandleID="k8s-pod-network.f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Workload="localhost-k8s-calico--apiserver--78c5dffbd--68fs6-eth0" Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.069 [INFO][5102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:33.095971 containerd[1468]: 2026-01-20 00:34:33.092 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e" Jan 20 00:34:33.096918 containerd[1468]: time="2026-01-20T00:34:33.095990540Z" level=info msg="TearDown network for sandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" successfully" Jan 20 00:34:33.102736 containerd[1468]: time="2026-01-20T00:34:33.102409552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:34:33.102736 containerd[1468]: time="2026-01-20T00:34:33.102606770Z" level=info msg="RemovePodSandbox \"f94d76c7a0c28e059d80672e372cef599efaee66892e5c536f8aa58d4a2f1a6e\" returns successfully" Jan 20 00:34:33.103898 containerd[1468]: time="2026-01-20T00:34:33.103717818Z" level=info msg="StopPodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\"" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.181 [WARNING][5120] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" WorkloadEndpoint="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.182 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.182 [INFO][5120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" iface="eth0" netns="" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.182 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.182 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.219 [INFO][5128] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.219 [INFO][5128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.220 [INFO][5128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.227 [WARNING][5128] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.227 [INFO][5128] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.230 [INFO][5128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:33.238034 containerd[1468]: 2026-01-20 00:34:33.234 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.238034 containerd[1468]: time="2026-01-20T00:34:33.238029799Z" level=info msg="TearDown network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" successfully" Jan 20 00:34:33.238741 containerd[1468]: time="2026-01-20T00:34:33.238053543Z" level=info msg="StopPodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" returns successfully" Jan 20 00:34:33.238814 containerd[1468]: time="2026-01-20T00:34:33.238762270Z" level=info msg="RemovePodSandbox for \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\"" Jan 20 00:34:33.238841 containerd[1468]: time="2026-01-20T00:34:33.238811983Z" level=info msg="Forcibly stopping sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\"" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.305 [WARNING][5146] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" WorkloadEndpoint="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.306 [INFO][5146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.306 [INFO][5146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" iface="eth0" netns="" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.306 [INFO][5146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.306 [INFO][5146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.336 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.336 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.336 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.344 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.344 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" HandleID="k8s-pod-network.317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Workload="localhost-k8s-whisker--6d544c66b--z7977-eth0" Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.347 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:33.352841 containerd[1468]: 2026-01-20 00:34:33.349 [INFO][5146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b" Jan 20 00:34:33.353164 containerd[1468]: time="2026-01-20T00:34:33.352901615Z" level=info msg="TearDown network for sandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" successfully" Jan 20 00:34:33.360120 containerd[1468]: time="2026-01-20T00:34:33.359914678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:34:33.360120 containerd[1468]: time="2026-01-20T00:34:33.360022238Z" level=info msg="RemovePodSandbox \"317fa087aaeec0cf63e9fbfca4c9017edb49b0908936d2cfd3138469c7864a6b\" returns successfully" Jan 20 00:34:33.845675 kubelet[2558]: E0120 00:34:33.844672 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:33.845675 kubelet[2558]: E0120 00:34:33.845372 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:34.171293 systemd-networkd[1395]: cali511405e0ca2: Gained IPv6LL Jan 20 00:34:39.570865 containerd[1468]: time="2026-01-20T00:34:39.570290277Z" level=info msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.658 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.658 [INFO][5189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" iface="eth0" netns="/var/run/netns/cni-de35ee70-ccff-dc59-3fd4-c1fff05d7d54" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.659 [INFO][5189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" iface="eth0" netns="/var/run/netns/cni-de35ee70-ccff-dc59-3fd4-c1fff05d7d54" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.660 [INFO][5189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" iface="eth0" netns="/var/run/netns/cni-de35ee70-ccff-dc59-3fd4-c1fff05d7d54" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.660 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.660 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.695 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.696 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.696 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.710 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.711 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.714 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:39.721732 containerd[1468]: 2026-01-20 00:34:39.717 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:34:39.723951 containerd[1468]: time="2026-01-20T00:34:39.723750833Z" level=info msg="TearDown network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" successfully" Jan 20 00:34:39.723951 containerd[1468]: time="2026-01-20T00:34:39.723788974Z" level=info msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" returns successfully" Jan 20 00:34:39.724959 containerd[1468]: time="2026-01-20T00:34:39.724856363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654778bb87-lw5jd,Uid:a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c,Namespace:calico-system,Attempt:1,}" Jan 20 00:34:39.725905 systemd[1]: run-netns-cni\x2dde35ee70\x2dccff\x2ddc59\x2d3fd4\x2dc1fff05d7d54.mount: Deactivated successfully. 
Jan 20 00:34:40.027692 systemd-networkd[1395]: cali4a5df3b5f43: Link UP Jan 20 00:34:40.029745 systemd-networkd[1395]: cali4a5df3b5f43: Gained carrier Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.826 [INFO][5207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0 calico-kube-controllers-654778bb87- calico-system a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c 1160 0 2026-01-20 00:34:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:654778bb87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-654778bb87-lw5jd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a5df3b5f43 [] [] }} ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.826 [INFO][5207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.952 [INFO][5221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" HandleID="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.953 [INFO][5221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" HandleID="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-654778bb87-lw5jd", "timestamp":"2026-01-20 00:34:39.952555949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.953 [INFO][5221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.953 [INFO][5221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.954 [INFO][5221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.972 [INFO][5221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.981 [INFO][5221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.990 [INFO][5221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.994 [INFO][5221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.998 [INFO][5221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:39.998 [INFO][5221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:40.002 [INFO][5221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:40.009 [INFO][5221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:40.018 [INFO][5221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:40.019 [INFO][5221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" host="localhost" Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:40.019 [INFO][5221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:34:40.068799 containerd[1468]: 2026-01-20 00:34:40.019 [INFO][5221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" HandleID="k8s-pod-network.5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.071310 containerd[1468]: 2026-01-20 00:34:40.023 [INFO][5207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0", GenerateName:"calico-kube-controllers-654778bb87-", Namespace:"calico-system", SelfLink:"", UID:"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"654778bb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-654778bb87-lw5jd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a5df3b5f43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:40.071310 containerd[1468]: 2026-01-20 00:34:40.023 [INFO][5207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.071310 containerd[1468]: 2026-01-20 00:34:40.023 [INFO][5207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a5df3b5f43 ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.071310 containerd[1468]: 2026-01-20 00:34:40.031 [INFO][5207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.071310 containerd[1468]: 2026-01-20 00:34:40.032 [INFO][5207] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0", GenerateName:"calico-kube-controllers-654778bb87-", Namespace:"calico-system", SelfLink:"", UID:"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"654778bb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e", Pod:"calico-kube-controllers-654778bb87-lw5jd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a5df3b5f43", MAC:"22:a1:54:cf:1f:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:40.071310 containerd[1468]: 2026-01-20 00:34:40.054 [INFO][5207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e" Namespace="calico-system" Pod="calico-kube-controllers-654778bb87-lw5jd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:34:40.297071 containerd[1468]: time="2026-01-20T00:34:40.295682503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:40.297071 containerd[1468]: time="2026-01-20T00:34:40.295878007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:40.297071 containerd[1468]: time="2026-01-20T00:34:40.295921137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:40.297071 containerd[1468]: time="2026-01-20T00:34:40.296068132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:40.340735 systemd[1]: Started cri-containerd-5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e.scope - libcontainer container 5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e. 
Jan 20 00:34:40.367576 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:40.407804 containerd[1468]: time="2026-01-20T00:34:40.407294994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654778bb87-lw5jd,Uid:a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c,Namespace:calico-system,Attempt:1,} returns sandbox id \"5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e\"" Jan 20 00:34:40.410697 containerd[1468]: time="2026-01-20T00:34:40.410630591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:34:40.476983 containerd[1468]: time="2026-01-20T00:34:40.476910199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:40.479649 containerd[1468]: time="2026-01-20T00:34:40.479347937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:34:40.479649 containerd[1468]: time="2026-01-20T00:34:40.479562963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:34:40.480098 kubelet[2558]: E0120 00:34:40.479990 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:34:40.480812 kubelet[2558]: E0120 00:34:40.480103 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:34:40.480812 kubelet[2558]: E0120 00:34:40.480308 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9qvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-654778bb87-lw5jd_calico-system(a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:40.482130 kubelet[2558]: E0120 00:34:40.481932 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:40.936101 kubelet[2558]: E0120 00:34:40.935979 2558 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:41.595048 systemd-networkd[1395]: cali4a5df3b5f43: Gained IPv6LL Jan 20 00:34:41.940086 kubelet[2558]: E0120 00:34:41.937907 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:42.565013 containerd[1468]: time="2026-01-20T00:34:42.564748557Z" level=info msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" Jan 20 00:34:42.566853 containerd[1468]: time="2026-01-20T00:34:42.565839933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:34:42.635873 containerd[1468]: time="2026-01-20T00:34:42.635721585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:42.639099 containerd[1468]: time="2026-01-20T00:34:42.639018560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:34:42.639393 containerd[1468]: time="2026-01-20T00:34:42.639120680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:34:42.639595 kubelet[2558]: E0120 00:34:42.639550 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:34:42.639857 kubelet[2558]: E0120 00:34:42.639708 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:34:42.640131 kubelet[2558]: E0120 00:34:42.639986 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aa4fe703c3ce4a2fad7a06c0824f3068,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:42.644703 containerd[1468]: time="2026-01-20T00:34:42.643684870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.646 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.648 [INFO][5301] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" iface="eth0" netns="/var/run/netns/cni-6bb8886e-e51f-49be-7352-120801ca0dd2" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.648 [INFO][5301] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" iface="eth0" netns="/var/run/netns/cni-6bb8886e-e51f-49be-7352-120801ca0dd2" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.649 [INFO][5301] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" iface="eth0" netns="/var/run/netns/cni-6bb8886e-e51f-49be-7352-120801ca0dd2" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.649 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.649 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.688 [INFO][5311] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.689 [INFO][5311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.689 [INFO][5311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.695 [WARNING][5311] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.696 [INFO][5311] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.699 [INFO][5311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:34:42.707193 containerd[1468]: 2026-01-20 00:34:42.704 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:34:42.708156 containerd[1468]: time="2026-01-20T00:34:42.708080331Z" level=info msg="TearDown network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" successfully" Jan 20 00:34:42.708156 containerd[1468]: time="2026-01-20T00:34:42.708131616Z" level=info msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" returns successfully" Jan 20 00:34:42.709236 containerd[1468]: time="2026-01-20T00:34:42.709215100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bx6wj,Uid:4fd51efe-cc95-4265-995a-08b13dbea3b1,Namespace:calico-system,Attempt:1,}" Jan 20 00:34:42.712916 systemd[1]: run-netns-cni\x2d6bb8886e\x2de51f\x2d49be\x2d7352\x2d120801ca0dd2.mount: Deactivated successfully. 
Jan 20 00:34:42.718138 containerd[1468]: time="2026-01-20T00:34:42.717944091Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:42.728074 containerd[1468]: time="2026-01-20T00:34:42.727978661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:34:42.728074 containerd[1468]: time="2026-01-20T00:34:42.728008897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:34:42.728324 kubelet[2558]: E0120 00:34:42.728245 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:42.728324 kubelet[2558]: E0120 00:34:42.728303 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:42.728851 kubelet[2558]: E0120 00:34:42.728664 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pffxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-t9x7r_calico-apiserver(0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:42.730841 kubelet[2558]: E0120 00:34:42.730011 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:42.731178 containerd[1468]: time="2026-01-20T00:34:42.730472148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:34:42.796243 containerd[1468]: time="2026-01-20T00:34:42.796163115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:42.798071 containerd[1468]: time="2026-01-20T00:34:42.797910392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:34:42.798188 containerd[1468]: time="2026-01-20T00:34:42.798045714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:34:42.798560 kubelet[2558]: E0120 00:34:42.798390 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:34:42.798694 kubelet[2558]: E0120 00:34:42.798576 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:34:42.799055 kubelet[2558]: E0120 00:34:42.798717 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:42.800470 kubelet[2558]: E0120 00:34:42.800303 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:34:42.917898 systemd-networkd[1395]: calie62424d36d3: Link UP Jan 20 00:34:42.922262 systemd-networkd[1395]: calie62424d36d3: Gained carrier Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.781 [INFO][5318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--bx6wj-eth0 goldmane-666569f655- calico-system 4fd51efe-cc95-4265-995a-08b13dbea3b1 1179 0 2026-01-20 00:34:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-bx6wj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie62424d36d3 [] [] }} ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.781 [INFO][5318] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.833 [INFO][5333] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" HandleID="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.833 [INFO][5333] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" HandleID="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-bx6wj", "timestamp":"2026-01-20 00:34:42.833686911 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.833 [INFO][5333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.834 [INFO][5333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.834 [INFO][5333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.843 [INFO][5333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.849 [INFO][5333] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.856 [INFO][5333] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.869 [INFO][5333] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.874 [INFO][5333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.875 [INFO][5333] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.880 [INFO][5333] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9 Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.890 [INFO][5333] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.906 [INFO][5333] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.908 [INFO][5333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" host="localhost" Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.908 [INFO][5333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:34:42.943395 containerd[1468]: 2026-01-20 00:34:42.908 [INFO][5333] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" HandleID="k8s-pod-network.a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.945471 containerd[1468]: 2026-01-20 00:34:42.913 [INFO][5318] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bx6wj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4fd51efe-cc95-4265-995a-08b13dbea3b1", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-bx6wj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie62424d36d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:42.945471 containerd[1468]: 2026-01-20 00:34:42.913 [INFO][5318] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.945471 containerd[1468]: 2026-01-20 00:34:42.913 [INFO][5318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie62424d36d3 ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.945471 containerd[1468]: 2026-01-20 00:34:42.922 [INFO][5318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.945471 containerd[1468]: 2026-01-20 00:34:42.923 [INFO][5318] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bx6wj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4fd51efe-cc95-4265-995a-08b13dbea3b1", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9", Pod:"goldmane-666569f655-bx6wj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie62424d36d3", MAC:"7a:42:9c:a6:5e:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:34:42.945471 containerd[1468]: 2026-01-20 00:34:42.938 [INFO][5318] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9" Namespace="calico-system" Pod="goldmane-666569f655-bx6wj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:34:42.977691 containerd[1468]: time="2026-01-20T00:34:42.977382858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:34:42.977691 containerd[1468]: time="2026-01-20T00:34:42.977573403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:34:42.977927 containerd[1468]: time="2026-01-20T00:34:42.977689891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:42.978599 containerd[1468]: time="2026-01-20T00:34:42.978528824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:34:43.022818 systemd[1]: Started cri-containerd-a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9.scope - libcontainer container a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9. 
Jan 20 00:34:43.054902 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:34:43.097834 containerd[1468]: time="2026-01-20T00:34:43.097800118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bx6wj,Uid:4fd51efe-cc95-4265-995a-08b13dbea3b1,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9\"" Jan 20 00:34:43.101296 containerd[1468]: time="2026-01-20T00:34:43.101140045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:34:43.176881 containerd[1468]: time="2026-01-20T00:34:43.176669216Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:43.178836 containerd[1468]: time="2026-01-20T00:34:43.178601622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:34:43.178836 containerd[1468]: time="2026-01-20T00:34:43.178713176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:34:43.178986 kubelet[2558]: E0120 00:34:43.178868 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:34:43.178986 kubelet[2558]: E0120 00:34:43.178923 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:34:43.179302 kubelet[2558]: E0120 00:34:43.179133 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt2mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bx6wj_calico-system(4fd51efe-cc95-4265-995a-08b13dbea3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:43.180942 kubelet[2558]: E0120 00:34:43.180755 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:43.945001 kubelet[2558]: E0120 
00:34:43.944919 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:44.474927 systemd-networkd[1395]: calie62424d36d3: Gained IPv6LL Jan 20 00:34:44.959617 kubelet[2558]: E0120 00:34:44.959469 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:45.565355 containerd[1468]: time="2026-01-20T00:34:45.565322613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:34:45.627338 containerd[1468]: time="2026-01-20T00:34:45.627159276Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:45.629339 containerd[1468]: time="2026-01-20T00:34:45.629197999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:34:45.629339 containerd[1468]: time="2026-01-20T00:34:45.629285344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:34:45.629655 kubelet[2558]: E0120 00:34:45.629592 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:34:45.629655 kubelet[2558]: E0120 00:34:45.629636 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:34:45.629844 kubelet[2558]: E0120 00:34:45.629725 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:45.632547 containerd[1468]: time="2026-01-20T00:34:45.632328337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:34:45.714282 containerd[1468]: time="2026-01-20T00:34:45.714173571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:45.715923 containerd[1468]: time="2026-01-20T00:34:45.715708203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:34:45.715923 containerd[1468]: time="2026-01-20T00:34:45.715836854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:34:45.716185 kubelet[2558]: E0120 00:34:45.716097 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:34:45.716336 kubelet[2558]: E0120 00:34:45.716188 2558 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:34:45.716715 kubelet[2558]: E0120 00:34:45.716584 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:45.718649 kubelet[2558]: E0120 00:34:45.718575 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:46.566319 containerd[1468]: time="2026-01-20T00:34:46.566222599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:34:46.629170 containerd[1468]: time="2026-01-20T00:34:46.629004179Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:46.630806 containerd[1468]: time="2026-01-20T00:34:46.630722144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:34:46.630933 containerd[1468]: time="2026-01-20T00:34:46.630807129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:34:46.631376 kubelet[2558]: E0120 00:34:46.631186 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:46.631376 kubelet[2558]: E0120 00:34:46.631296 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:34:46.631962 kubelet[2558]: E0120 00:34:46.631617 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75rcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-68fs6_calico-apiserver(96693105-0319-44f2-a458-134dbd8dc9b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:46.633596 kubelet[2558]: E0120 00:34:46.633331 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:47.576575 kubelet[2558]: E0120 00:34:47.576395 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:51.562578 kubelet[2558]: E0120 00:34:51.562398 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:52.336274 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:59888.service - OpenSSH per-connection server daemon (10.0.0.1:59888). Jan 20 00:34:52.433149 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 59888 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:52.436852 sshd[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:52.445803 systemd-logind[1452]: New session 10 of user core. Jan 20 00:34:52.460039 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:34:52.745960 sshd[5406]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:52.752007 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:59888.service: Deactivated successfully. Jan 20 00:34:52.755983 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:34:52.759368 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:34:52.763284 systemd-logind[1452]: Removed session 10. 
Jan 20 00:34:53.563946 kubelet[2558]: E0120 00:34:53.563694 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:34:55.578244 containerd[1468]: time="2026-01-20T00:34:55.578019630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:34:55.661271 containerd[1468]: time="2026-01-20T00:34:55.661195773Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:55.663648 containerd[1468]: time="2026-01-20T00:34:55.663370928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:34:55.663928 containerd[1468]: time="2026-01-20T00:34:55.663789149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:34:55.664275 kubelet[2558]: E0120 00:34:55.664078 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:34:55.664846 kubelet[2558]: E0120 00:34:55.664272 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:34:55.664846 kubelet[2558]: E0120 00:34:55.664612 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9qvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-654778bb87-lw5jd_calico-system(a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:55.666078 kubelet[2558]: E0120 00:34:55.665919 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:34:57.577830 kubelet[2558]: E0120 00:34:57.577352 2558 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:57.584906 containerd[1468]: time="2026-01-20T00:34:57.583716889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:34:57.588549 kubelet[2558]: E0120 00:34:57.586281 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:34:57.652348 containerd[1468]: time="2026-01-20T00:34:57.652103992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:34:57.654071 containerd[1468]: time="2026-01-20T00:34:57.653773353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:34:57.654071 containerd[1468]: time="2026-01-20T00:34:57.654027518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:34:57.654328 kubelet[2558]: E0120 00:34:57.654258 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:34:57.654403 kubelet[2558]: E0120 00:34:57.654336 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:34:57.654837 kubelet[2558]: E0120 00:34:57.654738 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt2mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bx6wj_calico-system(4fd51efe-cc95-4265-995a-08b13dbea3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:34:57.657130 kubelet[2558]: E0120 00:34:57.657030 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:34:57.770954 systemd[1]: Started 
sshd@10-10.0.0.10:22-10.0.0.1:59904.service - OpenSSH per-connection server daemon (10.0.0.1:59904). Jan 20 00:34:57.839633 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:57.842593 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:57.854350 systemd-logind[1452]: New session 11 of user core. Jan 20 00:34:57.863769 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:34:57.883882 kubelet[2558]: E0120 00:34:57.882919 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:34:58.015160 sshd[5455]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:58.020702 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:59904.service: Deactivated successfully. Jan 20 00:34:58.023477 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:34:58.024623 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:34:58.026655 systemd-logind[1452]: Removed session 11. Jan 20 00:34:58.564845 kubelet[2558]: E0120 00:34:58.564777 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:34:58.565669 kubelet[2558]: E0120 00:34:58.564875 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:35:03.045186 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:48406.service - OpenSSH per-connection server daemon (10.0.0.1:48406). 
Jan 20 00:35:03.098350 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 48406 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:03.100713 sshd[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:03.106737 systemd-logind[1452]: New session 12 of user core. Jan 20 00:35:03.122727 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:35:03.279742 sshd[5480]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:03.286368 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:48406.service: Deactivated successfully. Jan 20 00:35:03.290233 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:35:03.292775 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:35:03.295594 systemd-logind[1452]: Removed session 12. Jan 20 00:35:04.564780 containerd[1468]: time="2026-01-20T00:35:04.564735000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:35:04.659254 containerd[1468]: time="2026-01-20T00:35:04.659142330Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:04.660987 containerd[1468]: time="2026-01-20T00:35:04.660864006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:35:04.660987 containerd[1468]: time="2026-01-20T00:35:04.660907131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:35:04.661374 kubelet[2558]: E0120 00:35:04.661227 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:04.661374 kubelet[2558]: E0120 00:35:04.661291 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:04.661893 kubelet[2558]: E0120 00:35:04.661425 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pffxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-t9x7r_calico-apiserver(0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:04.663005 kubelet[2558]: E0120 00:35:04.662959 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:35:06.567631 kubelet[2558]: E0120 00:35:06.567278 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:07.564702 kubelet[2558]: E0120 00:35:07.563677 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:35:08.297369 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410). Jan 20 00:35:08.388404 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:08.391134 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:08.398043 systemd-logind[1452]: New session 13 of user core. Jan 20 00:35:08.407755 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:35:08.574018 sshd[5498]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:08.580164 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:48410.service: Deactivated successfully. Jan 20 00:35:08.582754 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:35:08.583955 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:35:08.585680 systemd-logind[1452]: Removed session 13. Jan 20 00:35:09.564329 containerd[1468]: time="2026-01-20T00:35:09.564216554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:35:09.717883 containerd[1468]: time="2026-01-20T00:35:09.717645926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:09.720926 containerd[1468]: time="2026-01-20T00:35:09.720684405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:35:09.722690 containerd[1468]: time="2026-01-20T00:35:09.720878791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:35:09.722762 kubelet[2558]: E0120 00:35:09.721576 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:35:09.722762 kubelet[2558]: E0120 00:35:09.721755 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:35:09.722762 kubelet[2558]: E0120 00:35:09.722049 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aa4fe703c3ce4a2fad7a06c0824f3068,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:09.726025 containerd[1468]: time="2026-01-20T00:35:09.725977225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:35:09.792207 containerd[1468]: time="2026-01-20T00:35:09.792055550Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:09.794193 containerd[1468]: time="2026-01-20T00:35:09.794055746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:35:09.794193 containerd[1468]: time="2026-01-20T00:35:09.794151628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:35:09.794426 kubelet[2558]: E0120 00:35:09.794348 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:35:09.794426 kubelet[2558]: E0120 00:35:09.794410 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:35:09.794757 kubelet[2558]: E0120 00:35:09.794668 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:09.796756 kubelet[2558]: E0120 00:35:09.796358 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:35:10.564559 kubelet[2558]: E0120 00:35:10.564324 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:35:11.566242 containerd[1468]: time="2026-01-20T00:35:11.565850673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:35:11.629066 containerd[1468]: time="2026-01-20T00:35:11.628931545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:11.630894 containerd[1468]: time="2026-01-20T00:35:11.630722772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:35:11.630894 containerd[1468]: time="2026-01-20T00:35:11.630774962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:35:11.631126 kubelet[2558]: E0120 00:35:11.631057 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:11.631672 kubelet[2558]: E0120 00:35:11.631126 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:11.631672 kubelet[2558]: E0120 00:35:11.631362 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75rcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-68fs6_calico-apiserver(96693105-0319-44f2-a458-134dbd8dc9b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:11.632989 kubelet[2558]: E0120 00:35:11.632841 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:35:13.565810 containerd[1468]: time="2026-01-20T00:35:13.565614332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:35:13.601666 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:54580.service - OpenSSH per-connection server daemon (10.0.0.1:54580). 
Jan 20 00:35:13.654171 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 54580 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:13.657369 containerd[1468]: time="2026-01-20T00:35:13.657207395Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:13.657665 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:13.659343 containerd[1468]: time="2026-01-20T00:35:13.659212726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:35:13.659386 containerd[1468]: time="2026-01-20T00:35:13.659324966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:35:13.660184 kubelet[2558]: E0120 00:35:13.659789 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:35:13.660184 kubelet[2558]: E0120 00:35:13.659869 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:35:13.660184 kubelet[2558]: E0120 00:35:13.659975 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:13.662942 containerd[1468]: time="2026-01-20T00:35:13.662603751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:35:13.668389 systemd-logind[1452]: New session 14 of user core. Jan 20 00:35:13.679874 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 00:35:13.735893 containerd[1468]: time="2026-01-20T00:35:13.735702040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:13.737608 containerd[1468]: time="2026-01-20T00:35:13.737268872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:35:13.737778 containerd[1468]: time="2026-01-20T00:35:13.737601409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:35:13.738345 kubelet[2558]: E0120 00:35:13.738163 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:35:13.738345 kubelet[2558]: E0120 00:35:13.738319 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:35:13.738657 kubelet[2558]: E0120 00:35:13.738585 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:13.740124 kubelet[2558]: E0120 00:35:13.739982 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:35:13.819070 sshd[5522]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:13.828441 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:54580.service: Deactivated successfully. Jan 20 00:35:13.830337 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:35:13.832185 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:35:13.837032 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:54592.service - OpenSSH per-connection server daemon (10.0.0.1:54592). 
Jan 20 00:35:13.838958 systemd-logind[1452]: Removed session 14. Jan 20 00:35:13.879441 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 54592 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:13.881273 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:13.887596 systemd-logind[1452]: New session 15 of user core. Jan 20 00:35:13.899696 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:35:14.059647 sshd[5538]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:14.068323 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:54592.service: Deactivated successfully. Jan 20 00:35:14.070725 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:35:14.075239 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:35:14.087201 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:54604.service - OpenSSH per-connection server daemon (10.0.0.1:54604). Jan 20 00:35:14.090422 systemd-logind[1452]: Removed session 15. Jan 20 00:35:14.140085 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 54604 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:14.142712 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:14.149004 systemd-logind[1452]: New session 16 of user core. Jan 20 00:35:14.157755 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:35:14.290106 sshd[5550]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:14.296198 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:54604.service: Deactivated successfully. Jan 20 00:35:14.299376 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:35:14.300820 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:35:14.303098 systemd-logind[1452]: Removed session 16. Jan 20 00:35:17.564653 kubelet[2558]: E0120 00:35:17.564321 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:35:19.313970 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:54616.service - OpenSSH per-connection server daemon (10.0.0.1:54616). Jan 20 00:35:19.353249 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 54616 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:19.355785 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:19.362571 systemd-logind[1452]: New session 17 of user core. Jan 20 00:35:19.376898 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:35:19.538122 sshd[5564]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:19.543729 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:54616.service: Deactivated successfully. Jan 20 00:35:19.546851 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:35:19.549082 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. 
Jan 20 00:35:19.551363 systemd-logind[1452]: Removed session 17. Jan 20 00:35:20.575217 containerd[1468]: time="2026-01-20T00:35:20.575121363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:35:20.661050 containerd[1468]: time="2026-01-20T00:35:20.660842398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:20.664257 containerd[1468]: time="2026-01-20T00:35:20.663867280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:35:20.664257 containerd[1468]: time="2026-01-20T00:35:20.664096244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:35:20.664438 kubelet[2558]: E0120 00:35:20.664286 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:35:20.664438 kubelet[2558]: E0120 00:35:20.664352 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:35:20.665387 kubelet[2558]: E0120 00:35:20.664743 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9qvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-654778bb87-lw5jd_calico-system(a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:20.667331 kubelet[2558]: E0120 00:35:20.666851 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:35:21.585045 kubelet[2558]: E0120 00:35:21.584908 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:35:24.552178 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:42492.service - OpenSSH per-connection server daemon (10.0.0.1:42492). 
Jan 20 00:35:24.564088 containerd[1468]: time="2026-01-20T00:35:24.563964049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:35:24.614742 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 42492 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:24.617201 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:24.623989 systemd-logind[1452]: New session 18 of user core. Jan 20 00:35:24.628341 containerd[1468]: time="2026-01-20T00:35:24.628148253Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:24.630117 containerd[1468]: time="2026-01-20T00:35:24.630061735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:35:24.630184 containerd[1468]: time="2026-01-20T00:35:24.630152424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:35:24.630562 kubelet[2558]: E0120 00:35:24.630351 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:35:24.630562 kubelet[2558]: E0120 00:35:24.630455 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:35:24.632395 kubelet[2558]: E0120 00:35:24.630816 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt2mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bx6wj_calico-system(4fd51efe-cc95-4265-995a-08b13dbea3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:24.632131 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 00:35:24.632926 kubelet[2558]: E0120 00:35:24.632779 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:35:24.767083 sshd[5585]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:24.771565 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:42492.service: Deactivated successfully. Jan 20 00:35:24.773741 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:35:24.774897 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:35:24.776787 systemd-logind[1452]: Removed session 18. Jan 20 00:35:25.563844 kubelet[2558]: E0120 00:35:25.563631 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:35:26.564288 kubelet[2558]: E0120 00:35:26.564103 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:35:29.783694 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:42498.service - OpenSSH per-connection server daemon (10.0.0.1:42498). Jan 20 00:35:29.829688 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 42498 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:29.832075 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:29.838132 systemd-logind[1452]: New session 19 of user core. Jan 20 00:35:29.856821 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:35:30.153550 sshd[5623]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:30.172668 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:42498.service: Deactivated successfully. Jan 20 00:35:30.204795 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 20 00:35:30.209609 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 20 00:35:30.221459 systemd-logind[1452]: Removed session 19. Jan 20 00:35:32.579884 kubelet[2558]: E0120 00:35:32.579707 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:35:33.368670 containerd[1468]: time="2026-01-20T00:35:33.368592818Z" level=info msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.426 [WARNING][5650] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0", GenerateName:"calico-kube-controllers-654778bb87-", Namespace:"calico-system", SelfLink:"", UID:"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c", ResourceVersion:"1441", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"654778bb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e", Pod:"calico-kube-controllers-654778bb87-lw5jd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a5df3b5f43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.426 [INFO][5650] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.426 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" iface="eth0" netns="" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.426 [INFO][5650] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.426 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.463 [INFO][5658] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.463 [INFO][5658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.463 [INFO][5658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.470 [WARNING][5658] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.470 [INFO][5658] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.473 [INFO][5658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:35:33.482168 containerd[1468]: 2026-01-20 00:35:33.476 [INFO][5650] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.482954 containerd[1468]: time="2026-01-20T00:35:33.482160303Z" level=info msg="TearDown network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" successfully" Jan 20 00:35:33.482954 containerd[1468]: time="2026-01-20T00:35:33.482202732Z" level=info msg="StopPodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" returns successfully" Jan 20 00:35:33.483075 containerd[1468]: time="2026-01-20T00:35:33.482976637Z" level=info msg="RemovePodSandbox for \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" Jan 20 00:35:33.483075 containerd[1468]: time="2026-01-20T00:35:33.483002745Z" level=info msg="Forcibly stopping sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\"" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.527 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0", GenerateName:"calico-kube-controllers-654778bb87-", Namespace:"calico-system", SelfLink:"", UID:"a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c", ResourceVersion:"1441", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"654778bb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e0247b73e153bd5b2bc37fb59a63e03590923ff421831fc496eae20f70ab25e", Pod:"calico-kube-controllers-654778bb87-lw5jd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a5df3b5f43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.528 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.528 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" iface="eth0" netns="" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.528 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.528 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.553 [INFO][5683] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.553 [INFO][5683] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.553 [INFO][5683] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.572 [WARNING][5683] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.572 [INFO][5683] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" HandleID="k8s-pod-network.fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Workload="localhost-k8s-calico--kube--controllers--654778bb87--lw5jd-eth0" Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.574 [INFO][5683] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:35:33.579884 containerd[1468]: 2026-01-20 00:35:33.577 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c" Jan 20 00:35:33.580456 containerd[1468]: time="2026-01-20T00:35:33.579979450Z" level=info msg="TearDown network for sandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" successfully" Jan 20 00:35:33.586254 containerd[1468]: time="2026-01-20T00:35:33.586151966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:35:33.586254 containerd[1468]: time="2026-01-20T00:35:33.586239390Z" level=info msg="RemovePodSandbox \"fa5347417455a16027c4c72da12bb3c15eaf0e12e5eab93898c136c5a174417c\" returns successfully" Jan 20 00:35:33.586860 containerd[1468]: time="2026-01-20T00:35:33.586815611Z" level=info msg="StopPodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\"" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.631 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b39ba6d-0875-4d35-90a8-c9d91492b367", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17", Pod:"coredns-668d6bf9bc-7zhgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali511405e0ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.631 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.631 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" iface="eth0" netns="" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.631 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.631 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.654 [INFO][5709] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.654 [INFO][5709] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.654 [INFO][5709] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.664 [WARNING][5709] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.664 [INFO][5709] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.666 [INFO][5709] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:35:33.671930 containerd[1468]: 2026-01-20 00:35:33.669 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.671930 containerd[1468]: time="2026-01-20T00:35:33.671869345Z" level=info msg="TearDown network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" successfully" Jan 20 00:35:33.671930 containerd[1468]: time="2026-01-20T00:35:33.671906424Z" level=info msg="StopPodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" returns successfully" Jan 20 00:35:33.672717 containerd[1468]: time="2026-01-20T00:35:33.672569140Z" level=info msg="RemovePodSandbox for \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\"" Jan 20 00:35:33.672717 containerd[1468]: time="2026-01-20T00:35:33.672608924Z" level=info msg="Forcibly stopping sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\"" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.719 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b39ba6d-0875-4d35-90a8-c9d91492b367", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 33, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c98e1e72720a6d34a7514702ba2375732503527ceb2c46f58b84805b89bf9e17", Pod:"coredns-668d6bf9bc-7zhgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali511405e0ca2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.720 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.720 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" iface="eth0" netns="" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.720 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.720 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.752 [INFO][5736] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.752 [INFO][5736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.752 [INFO][5736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.765 [WARNING][5736] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.765 [INFO][5736] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" HandleID="k8s-pod-network.79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Workload="localhost-k8s-coredns--668d6bf9bc--7zhgb-eth0" Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.768 [INFO][5736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:35:33.774162 containerd[1468]: 2026-01-20 00:35:33.770 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7" Jan 20 00:35:33.774620 containerd[1468]: time="2026-01-20T00:35:33.774179297Z" level=info msg="TearDown network for sandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" successfully" Jan 20 00:35:33.782429 containerd[1468]: time="2026-01-20T00:35:33.782357549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:35:33.782591 containerd[1468]: time="2026-01-20T00:35:33.782435825Z" level=info msg="RemovePodSandbox \"79cfb257135f37e45ff475e15fee22ea944370e8b0b40e7d922a685924ede8a7\" returns successfully" Jan 20 00:35:33.783128 containerd[1468]: time="2026-01-20T00:35:33.783099659Z" level=info msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.822 [WARNING][5754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bx6wj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4fd51efe-cc95-4265-995a-08b13dbea3b1", ResourceVersion:"1475", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9", Pod:"goldmane-666569f655-bx6wj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie62424d36d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.823 [INFO][5754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.823 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" iface="eth0" netns="" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.823 [INFO][5754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.823 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.856 [INFO][5762] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.856 [INFO][5762] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.856 [INFO][5762] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.863 [WARNING][5762] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.863 [INFO][5762] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.865 [INFO][5762] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:35:33.870591 containerd[1468]: 2026-01-20 00:35:33.868 [INFO][5754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.871142 containerd[1468]: time="2026-01-20T00:35:33.870637100Z" level=info msg="TearDown network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" successfully" Jan 20 00:35:33.871142 containerd[1468]: time="2026-01-20T00:35:33.870670483Z" level=info msg="StopPodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" returns successfully" Jan 20 00:35:33.871541 containerd[1468]: time="2026-01-20T00:35:33.871434479Z" level=info msg="RemovePodSandbox for \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" Jan 20 00:35:33.871608 containerd[1468]: time="2026-01-20T00:35:33.871558270Z" level=info msg="Forcibly stopping sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\"" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.921 [WARNING][5779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bx6wj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4fd51efe-cc95-4265-995a-08b13dbea3b1", ResourceVersion:"1475", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 34, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0d83d76ce596ffab3fc662f72db594c5efc2d9eef20b333f72cdad42011f0d9", Pod:"goldmane-666569f655-bx6wj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie62424d36d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.921 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.921 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" iface="eth0" netns="" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.921 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.921 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.953 [INFO][5787] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.953 [INFO][5787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.953 [INFO][5787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.971 [WARNING][5787] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.971 [INFO][5787] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" HandleID="k8s-pod-network.a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Workload="localhost-k8s-goldmane--666569f655--bx6wj-eth0" Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.973 [INFO][5787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:35:33.979401 containerd[1468]: 2026-01-20 00:35:33.976 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e" Jan 20 00:35:33.979401 containerd[1468]: time="2026-01-20T00:35:33.979324190Z" level=info msg="TearDown network for sandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" successfully" Jan 20 00:35:33.987346 containerd[1468]: time="2026-01-20T00:35:33.987234445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:35:33.987346 containerd[1468]: time="2026-01-20T00:35:33.987337968Z" level=info msg="RemovePodSandbox \"a4614b663d92aae9bae41baaef643dad75a5c010c6b2db4f0333eb0f0e77dc5e\" returns successfully" Jan 20 00:35:35.171918 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:33254.service - OpenSSH per-connection server daemon (10.0.0.1:33254). Jan 20 00:35:35.215240 sshd[5796]: Accepted publickey for core from 10.0.0.1 port 33254 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:35.217268 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:35.223988 systemd-logind[1452]: New session 20 of user core. Jan 20 00:35:35.230874 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:35:35.382661 sshd[5796]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:35.386745 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:33254.service: Deactivated successfully. Jan 20 00:35:35.389062 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:35:35.390099 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:35:35.392050 systemd-logind[1452]: Removed session 20. 
Jan 20 00:35:35.564660 kubelet[2558]: E0120 00:35:35.562718 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:35.564660 kubelet[2558]: E0120 00:35:35.563926 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:35:36.564328 kubelet[2558]: E0120 00:35:36.564258 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:35:37.571947 kubelet[2558]: E0120 00:35:37.571674 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:35:38.563438 kubelet[2558]: E0120 00:35:38.562900 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:39.564153 kubelet[2558]: E0120 00:35:39.564026 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:35:40.416010 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:33258.service - OpenSSH 
per-connection server daemon (10.0.0.1:33258). Jan 20 00:35:40.453976 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 33258 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:40.455722 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:40.461704 systemd-logind[1452]: New session 21 of user core. Jan 20 00:35:40.474813 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:35:40.565531 kubelet[2558]: E0120 00:35:40.565385 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:35:40.685154 sshd[5812]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:40.695134 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:33258.service: Deactivated successfully. Jan 20 00:35:40.697771 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:35:40.700681 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:35:40.708080 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:33262.service - OpenSSH per-connection server daemon (10.0.0.1:33262). Jan 20 00:35:40.709413 systemd-logind[1452]: Removed session 21. Jan 20 00:35:40.764328 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 33262 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:40.766922 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:40.774784 systemd-logind[1452]: New session 22 of user core. Jan 20 00:35:40.784942 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 00:35:41.197970 sshd[5826]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:41.207417 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:33262.service: Deactivated successfully. Jan 20 00:35:41.210877 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:35:41.213728 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:35:41.220438 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:33270.service - OpenSSH per-connection server daemon (10.0.0.1:33270). Jan 20 00:35:41.222424 systemd-logind[1452]: Removed session 22. Jan 20 00:35:41.276954 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 33270 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:41.279208 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:41.286838 systemd-logind[1452]: New session 23 of user core. Jan 20 00:35:41.295857 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 00:35:41.563278 kubelet[2558]: E0120 00:35:41.563186 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:42.234320 sshd[5838]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:42.253036 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:33282.service - OpenSSH per-connection server daemon (10.0.0.1:33282). Jan 20 00:35:42.253884 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:33270.service: Deactivated successfully. Jan 20 00:35:42.259198 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:35:42.281170 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:35:42.290308 systemd-logind[1452]: Removed session 23. Jan 20 00:35:42.357989 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 33282 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:42.361261 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:42.386472 systemd-logind[1452]: New session 24 of user core. Jan 20 00:35:42.396840 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 00:35:42.800300 sshd[5873]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:42.812045 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:33282.service: Deactivated successfully. Jan 20 00:35:42.814352 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:35:42.816180 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:35:42.823970 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:38808.service - OpenSSH per-connection server daemon (10.0.0.1:38808). Jan 20 00:35:42.825993 systemd-logind[1452]: Removed session 24. Jan 20 00:35:42.862128 sshd[5887]: Accepted publickey for core from 10.0.0.1 port 38808 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:42.864884 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:42.886026 systemd-logind[1452]: New session 25 of user core. Jan 20 00:35:42.898826 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 00:35:43.042047 sshd[5887]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:43.047926 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:38808.service: Deactivated successfully. Jan 20 00:35:43.050662 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:35:43.051794 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:35:43.053628 systemd-logind[1452]: Removed session 25. 
Jan 20 00:35:45.567009 containerd[1468]: time="2026-01-20T00:35:45.566968618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:35:45.645744 containerd[1468]: time="2026-01-20T00:35:45.645667379Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:45.647589 containerd[1468]: time="2026-01-20T00:35:45.647379283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:35:45.647589 containerd[1468]: time="2026-01-20T00:35:45.647450064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:35:45.648028 kubelet[2558]: E0120 00:35:45.647896 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:45.648028 kubelet[2558]: E0120 00:35:45.648015 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:45.648695 kubelet[2558]: E0120 00:35:45.648185 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pffxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-t9x7r_calico-apiserver(0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:45.649863 kubelet[2558]: E0120 00:35:45.649711 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:35:48.061316 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822). Jan 20 00:35:48.118311 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:48.120167 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:48.125910 systemd-logind[1452]: New session 26 of user core. Jan 20 00:35:48.140849 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 00:35:48.260912 sshd[5901]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:48.266354 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:38822.service: Deactivated successfully. Jan 20 00:35:48.268983 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:35:48.269957 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:35:48.271638 systemd-logind[1452]: Removed session 26. 
Jan 20 00:35:49.572920 kubelet[2558]: E0120 00:35:49.572792 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:35:49.573698 kubelet[2558]: E0120 00:35:49.573048 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:35:51.575433 kubelet[2558]: E0120 00:35:51.575131 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:35:52.583285 containerd[1468]: time="2026-01-20T00:35:52.583088254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:35:52.592638 kubelet[2558]: E0120 00:35:52.588294 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wkvnv" 
podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:35:52.703388 containerd[1468]: time="2026-01-20T00:35:52.702628808Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:35:52.708033 containerd[1468]: time="2026-01-20T00:35:52.707707153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:35:52.708033 containerd[1468]: time="2026-01-20T00:35:52.707934507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:35:52.710132 kubelet[2558]: E0120 00:35:52.709614 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:52.710607 kubelet[2558]: E0120 00:35:52.710316 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:35:52.711016 kubelet[2558]: E0120 00:35:52.710950 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75rcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78c5dffbd-68fs6_calico-apiserver(96693105-0319-44f2-a458-134dbd8dc9b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:35:52.712562 kubelet[2558]: E0120 00:35:52.712433 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8" Jan 20 00:35:53.282940 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:51282.service - OpenSSH per-connection server daemon (10.0.0.1:51282). Jan 20 00:35:53.336763 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 51282 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:53.340788 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:53.371730 systemd-logind[1452]: New session 27 of user core. Jan 20 00:35:53.377007 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 00:35:53.617008 sshd[5923]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:53.628124 systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:51282.service: Deactivated successfully. Jan 20 00:35:53.634611 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 00:35:53.637107 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit. Jan 20 00:35:53.640883 systemd-logind[1452]: Removed session 27. 
Jan 20 00:35:58.563968 kubelet[2558]: E0120 00:35:58.563218 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-t9x7r" podUID="0d6e1086-0ac8-4c92-bb35-cbe08d4a2e84" Jan 20 00:35:58.633096 systemd[1]: Started sshd@27-10.0.0.10:22-10.0.0.1:51294.service - OpenSSH per-connection server daemon (10.0.0.1:51294). Jan 20 00:35:58.701091 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 51294 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:35:58.703255 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:35:58.710363 systemd-logind[1452]: New session 28 of user core. Jan 20 00:35:58.716790 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 00:35:58.867731 sshd[5959]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:58.875399 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit. Jan 20 00:35:58.878153 systemd[1]: sshd@27-10.0.0.10:22-10.0.0.1:51294.service: Deactivated successfully. Jan 20 00:35:58.884760 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 00:35:58.890797 systemd-logind[1452]: Removed session 28. Jan 20 00:35:59.563039 kubelet[2558]: E0120 00:35:59.563000 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:59.563466 kubelet[2558]: E0120 00:35:59.563195 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:36:00.563951 kubelet[2558]: E0120 00:36:00.563859 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654778bb87-lw5jd" podUID="a0b6979d-ad60-4bcc-b38c-f806a4b1dd2c" Jan 20 00:36:01.568735 containerd[1468]: time="2026-01-20T00:36:01.568661865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:36:01.633845 containerd[1468]: time="2026-01-20T00:36:01.633789911Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:36:01.635413 containerd[1468]: time="2026-01-20T00:36:01.635345535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:36:01.635531 containerd[1468]: 
time="2026-01-20T00:36:01.635446654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:36:01.635820 kubelet[2558]: E0120 00:36:01.635732 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:36:01.635820 kubelet[2558]: E0120 00:36:01.635796 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:36:01.636788 kubelet[2558]: E0120 00:36:01.635934 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:aa4fe703c3ce4a2fad7a06c0824f3068,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:36:01.640254 containerd[1468]: time="2026-01-20T00:36:01.639967990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:36:01.699241 containerd[1468]: time="2026-01-20T00:36:01.699173507Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:36:01.701010 containerd[1468]: time="2026-01-20T00:36:01.700919746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:36:01.701102 containerd[1468]: time="2026-01-20T00:36:01.701026295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:36:01.701756 kubelet[2558]: E0120 00:36:01.701460 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:36:01.701756 kubelet[2558]: E0120 00:36:01.701664 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:36:01.701885 kubelet[2558]: E0120 00:36:01.701833 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64vzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77dcbc58d8-fnbv4_calico-system(1e558f7e-555f-414d-86be-1ebe08b27e55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:36:01.703308 kubelet[2558]: E0120 00:36:01.703218 2558 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77dcbc58d8-fnbv4" podUID="1e558f7e-555f-414d-86be-1ebe08b27e55" Jan 20 00:36:02.563849 kubelet[2558]: E0120 00:36:02.563745 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bx6wj" podUID="4fd51efe-cc95-4265-995a-08b13dbea3b1" Jan 20 00:36:03.938137 systemd[1]: Started sshd@28-10.0.0.10:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840). Jan 20 00:36:04.014226 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:36:04.016772 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:36:04.023134 systemd-logind[1452]: New session 29 of user core. Jan 20 00:36:04.040090 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 00:36:04.299181 sshd[5996]: pam_unix(sshd:session): session closed for user core Jan 20 00:36:04.310699 systemd[1]: sshd@28-10.0.0.10:22-10.0.0.1:33840.service: Deactivated successfully. Jan 20 00:36:04.317369 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 00:36:04.321291 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit. Jan 20 00:36:04.323248 systemd-logind[1452]: Removed session 29. 
Jan 20 00:36:04.566928 containerd[1468]: time="2026-01-20T00:36:04.566395160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:36:04.661977 containerd[1468]: time="2026-01-20T00:36:04.661311474Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:36:04.665945 containerd[1468]: time="2026-01-20T00:36:04.665768523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:36:04.666104 containerd[1468]: time="2026-01-20T00:36:04.665944762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:36:04.666335 kubelet[2558]: E0120 00:36:04.666166 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:36:04.666335 kubelet[2558]: E0120 00:36:04.666276 2558 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:36:04.667034 kubelet[2558]: E0120 00:36:04.666448 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:36:04.670202 containerd[1468]: time="2026-01-20T00:36:04.670110204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:36:04.757630 containerd[1468]: time="2026-01-20T00:36:04.756943043Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:36:04.762121 containerd[1468]: time="2026-01-20T00:36:04.761982085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:36:04.762303 containerd[1468]: time="2026-01-20T00:36:04.762125072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:36:04.762652 kubelet[2558]: E0120 00:36:04.762446 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:36:04.763004 kubelet[2558]: E0120 00:36:04.762654 2558 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:36:04.763004 kubelet[2558]: E0120 00:36:04.762858 2558 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg7pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wkvnv_calico-system(2d7f8729-92e8-466b-ac93-b93fcaadeb7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:36:04.764208 kubelet[2558]: E0120 00:36:04.764152 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wkvnv" podUID="2d7f8729-92e8-466b-ac93-b93fcaadeb7a" Jan 20 00:36:05.603196 kubelet[2558]: E0120 00:36:05.603017 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78c5dffbd-68fs6" podUID="96693105-0319-44f2-a458-134dbd8dc9b8"