Jan 20 00:45:31.071189 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:45:31.071355 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:45:31.071382 kernel: BIOS-provided physical RAM map: Jan 20 00:45:31.071393 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 00:45:31.071403 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 20 00:45:31.071413 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 20 00:45:31.071426 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 20 00:45:31.071436 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 20 00:45:31.071446 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 20 00:45:31.071457 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 20 00:45:31.071473 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 20 00:45:31.071484 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 20 00:45:31.071524 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 20 00:45:31.071536 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 20 00:45:31.071599 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 20 00:45:31.071613 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 20 00:45:31.071632 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 20 00:45:31.071644 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 20 00:45:31.071655 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 20 00:45:31.071667 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:45:31.071678 kernel: NX (Execute Disable) protection: active Jan 20 00:45:31.071690 kernel: APIC: Static calls initialized Jan 20 00:45:31.071701 kernel: efi: EFI v2.7 by EDK II Jan 20 00:45:31.071712 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 20 00:45:31.071724 kernel: SMBIOS 2.8 present. 
Jan 20 00:45:31.071736 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 20 00:45:31.071747 kernel: Hypervisor detected: KVM Jan 20 00:45:31.071765 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:45:31.071777 kernel: kvm-clock: using sched offset of 18564491170 cycles Jan 20 00:45:31.071790 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:45:31.071801 kernel: tsc: Detected 2445.426 MHz processor Jan 20 00:45:31.071814 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:45:31.071893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:45:31.071909 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 20 00:45:31.071922 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 00:45:31.071934 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:45:31.071953 kernel: Using GB pages for direct mapping Jan 20 00:45:31.071965 kernel: Secure boot disabled Jan 20 00:45:31.071977 kernel: ACPI: Early table checksum verification disabled Jan 20 00:45:31.071990 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 20 00:45:31.072009 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 20 00:45:31.072021 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:45:31.072034 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:45:31.072052 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 20 00:45:31.072064 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:45:31.072106 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:45:31.072120 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:45:31.072132 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:45:31.072143 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 00:45:31.072155 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 20 00:45:31.072173 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 20 00:45:31.072185 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 20 00:45:31.072197 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 20 00:45:31.072209 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 20 00:45:31.072220 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 20 00:45:31.072231 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 20 00:45:31.072243 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 20 00:45:31.072254 kernel: No NUMA configuration found Jan 20 00:45:31.072296 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 20 00:45:31.072315 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 20 00:45:31.072327 kernel: Zone ranges: Jan 20 00:45:31.072339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:45:31.072351 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 20 00:45:31.072362 kernel: Normal empty Jan 20 00:45:31.072374 kernel: Movable zone start for each node Jan 20 00:45:31.072385 kernel: Early memory node ranges Jan 20 00:45:31.072397 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 00:45:31.072409 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 20 00:45:31.072427 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 20 00:45:31.072439 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 20 00:45:31.072451 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 20 00:45:31.072462 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 20 00:45:31.072508 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 20 00:45:31.072523 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:45:31.072536 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 00:45:31.072585 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 20 00:45:31.072598 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:45:31.072610 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 20 00:45:31.072629 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 20 00:45:31.072642 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 20 00:45:31.072654 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:45:31.072666 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:45:31.072677 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:45:31.072689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:45:31.072701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:45:31.072712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:45:31.072724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:45:31.072744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:45:31.072756 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:45:31.072767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:45:31.072780 kernel: TSC deadline timer available Jan 20 00:45:31.072791 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:45:31.072802 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:45:31.072814 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:45:31.072893 kernel: kvm-guest: setup PV sched yield Jan 20 00:45:31.072910 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 20 00:45:31.072931 kernel: Booting paravirtualized kernel on KVM Jan 20 00:45:31.072944 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:45:31.072957 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:45:31.072969 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:45:31.072981 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:45:31.072993 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:45:31.073003 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:45:31.073015 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:45:31.073029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 
00:45:31.073080 kernel: random: crng init done Jan 20 00:45:31.073093 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:45:31.073105 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:45:31.073118 kernel: Fallback order for Node 0: 0 Jan 20 00:45:31.073130 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 20 00:45:31.073141 kernel: Policy zone: DMA32 Jan 20 00:45:31.073153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:45:31.073166 kernel: Memory: 2400612K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 166128K reserved, 0K cma-reserved) Jan 20 00:45:31.073185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:45:31.073197 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:45:31.073209 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:45:31.073221 kernel: Dynamic Preempt: voluntary Jan 20 00:45:31.073233 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:45:31.073261 kernel: rcu: RCU event tracing is enabled. Jan 20 00:45:31.073278 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:45:31.073290 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:45:31.073303 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:45:31.073317 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:45:31.073329 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:45:31.073342 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:45:31.073362 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:45:31.073375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:45:31.073388 kernel: Console: colour dummy device 80x25 Jan 20 00:45:31.073401 kernel: printk: console [ttyS0] enabled Jan 20 00:45:31.073449 kernel: ACPI: Core revision 20230628 Jan 20 00:45:31.073473 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:45:31.073486 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:45:31.073499 kernel: x2apic enabled Jan 20 00:45:31.073511 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:45:31.073523 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:45:31.073536 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:45:31.073588 kernel: kvm-guest: setup PV IPIs Jan 20 00:45:31.073602 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:45:31.073614 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:45:31.073634 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 20 00:45:31.073647 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:45:31.073660 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:45:31.073673 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:45:31.073686 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:45:31.073699 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:45:31.073712 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:45:31.073725 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:45:31.073737 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:45:31.073758 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 00:45:31.073771 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:45:31.073783 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:45:31.073795 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:45:31.073878 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:45:31.073897 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:45:31.073911 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:45:31.073925 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:45:31.073947 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:45:31.073961 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:45:31.073974 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:45:31.073987 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:45:31.074000 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:45:31.074013 kernel: landlock: Up and running. Jan 20 00:45:31.074025 kernel: SELinux: Initializing. Jan 20 00:45:31.074038 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:45:31.074051 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:45:31.074071 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:45:31.074084 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:45:31.074098 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:45:31.074110 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:45:31.074122 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:45:31.074135 kernel: signal: max sigframe size: 1776 Jan 20 00:45:31.074148 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:45:31.074162 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:45:31.074175 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:45:31.074195 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:45:31.074208 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:45:31.074221 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:45:31.074234 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:45:31.074246 kernel: smpboot: Max logical packages: 1 Jan 20 00:45:31.074257 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 00:45:31.074269 kernel: devtmpfs: initialized Jan 20 00:45:31.074281 kernel: x86/mm: Memory block size: 128MB Jan 20 00:45:31.074293 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 20 00:45:31.074313 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 20 00:45:31.074326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 20 00:45:31.074338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 20 00:45:31.074349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 20 00:45:31.074361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:45:31.074373 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:45:31.074385 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:45:31.074397 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:45:31.074409 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:45:31.074427 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:45:31.074440 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:45:31.074453 kernel: audit: type=2000 audit(1768869922.759:1): state=initialized audit_enabled=0 res=1 Jan 20 00:45:31.074465 kernel: cpuidle: using governor menu Jan 20 00:45:31.074477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:45:31.074489 kernel: dca service started, version 1.12.1 Jan 20 00:45:31.074535 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:45:31.077600 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:45:31.077626 kernel: PCI: Using configuration type 1 for base access Jan 20 00:45:31.077641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 00:45:31.077653 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:45:31.077666 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:45:31.077679 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:45:31.077691 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:45:31.077703 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:45:31.077715 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:45:31.077727 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:45:31.077746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:45:31.077760 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:45:31.077773 kernel: ACPI: Interpreter enabled Jan 20 00:45:31.077785 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:45:31.077799 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:45:31.077812 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:45:31.078057 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:45:31.078076 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:45:31.078089 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:45:31.078815 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:45:31.079302 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:45:31.079530 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:45:31.079590 kernel: PCI host bridge to bus 0000:00 Jan 20 00:45:31.080059 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:45:31.080268 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 00:45:31.080465 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:45:31.080746 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:45:31.081131 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:45:31.081431 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 20 00:45:31.083189 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:45:31.083414 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:45:31.083721 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:45:31.084111 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 20 00:45:31.084353 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 20 00:45:31.084716 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 20 00:45:31.085080 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 20 00:45:31.085323 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:45:31.086007 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:45:31.086255 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 20 00:45:31.086514 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 20 00:45:31.089093 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 20 00:45:31.089365 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:45:31.089647 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 20 00:45:31.090005 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 20 00:45:31.093405 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 20 00:45:31.093974 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:45:31.094270 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 20 00:45:31.094512 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 20 00:45:31.098968 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 20 00:45:31.099401 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 20 00:45:31.099651 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:45:31.099898 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:45:31.100103 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:45:31.100308 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 20 00:45:31.100491 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 20 00:45:31.100725 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:45:31.100969 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 20 00:45:31.100986 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:45:31.100998 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:45:31.101008 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:45:31.101026 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:45:31.101037 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:45:31.101048 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:45:31.101059 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:45:31.101069 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:45:31.101080 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 00:45:31.101091 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:45:31.101102 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:45:31.101112 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:45:31.101127 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:45:31.101137 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:45:31.101148 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:45:31.101159 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:45:31.101170 kernel: iommu: Default domain type: Translated Jan 20 00:45:31.101180 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:45:31.101191 kernel: efivars: Registered efivars operations Jan 20 00:45:31.101202 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:45:31.101213 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:45:31.101227 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 20 00:45:31.101238 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 20 00:45:31.101248 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 20 00:45:31.101258 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 20 00:45:31.101443 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:45:31.104091 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:45:31.105182 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:45:31.105202 kernel: vgaarb: loaded Jan 20 00:45:31.105225 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 20 00:45:31.105237 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:45:31.105247 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:45:31.105258 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:45:31.105270 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:45:31.105280 kernel: pnp: PnP ACPI init Jan 20 00:45:31.105685 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:45:31.105708 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:45:31.105721 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:45:31.105742 kernel: NET: Registered PF_INET protocol family Jan 20 00:45:31.105754 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:45:31.105765 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:45:31.105776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:45:31.105789 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:45:31.105802 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:45:31.105814 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:45:31.105879 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:45:31.105898 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:45:31.105910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:45:31.105922 kernel: NET: Registered PF_XDP protocol family Jan 20 00:45:31.106146 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 20 00:45:31.106375 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 20 00:45:31.106636 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:45:31.106820 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:45:31.111052 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:45:31.112374 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:45:31.112619 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 00:45:31.113606 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 20 00:45:31.113633 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:45:31.113648 kernel: Initialise system trusted keyrings Jan 20 00:45:31.113661 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:45:31.113676 kernel: Key type asymmetric registered Jan 20 00:45:31.113688 kernel: Asymmetric key parser 'x509' registered Jan 20 00:45:31.113701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:45:31.113725 kernel: io scheduler mq-deadline registered Jan 20 00:45:31.113740 kernel: io scheduler kyber registered Jan 20 00:45:31.113752 kernel: io scheduler bfq registered Jan 20 00:45:31.113765 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:45:31.113780 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:45:31.113794 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:45:31.113808 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:45:31.113821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:45:31.113880 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 20 00:45:31.113901 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:45:31.113914 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:45:31.113928 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:45:31.114248 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:45:31.114483 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:45:31.114739 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:45:28 UTC (1768869928) Jan 20 00:45:31.115033 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:45:31.115055 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:45:31.115078 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:45:31.115092 kernel: efifb: probing for efifb Jan 20 00:45:31.115106 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 20 00:45:31.115118 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 20 00:45:31.115131 kernel: efifb: scrolling: redraw Jan 20 00:45:31.115144 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 20 00:45:31.115158 kernel: Console: switching to colour frame buffer device 100x37 Jan 20 00:45:31.115170 kernel: fb0: EFI VGA frame buffer device Jan 20 00:45:31.115184 kernel: pstore: Using crash dump compression: deflate Jan 20 00:45:31.115203 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 00:45:31.115216 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:45:31.115229 kernel: Segment Routing with IPv6 Jan 20 00:45:31.115242 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:45:31.115255 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:45:31.115268 kernel: Key type dns_resolver registered Jan 20 00:45:31.115281 kernel: IPI shorthand broadcast: enabled Jan 20 00:45:31.115323 kernel: sched_clock: Marking stable (5718021797, 595271091)->(7948633479, -1635340591) Jan 20 00:45:31.115339 kernel: registered taskstats version 1 Jan 20 00:45:31.115356 kernel: Loading compiled-in X.509 certificates Jan 20 00:45:31.115368 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:45:31.115380 kernel: Key type .fscrypt registered Jan 20 00:45:31.115392 kernel: Key type fscrypt-provisioning registered Jan 20 00:45:31.115404 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:45:31.115417 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:45:31.115429 kernel: ima: No architecture policies found Jan 20 00:45:31.115441 kernel: clk: Disabling unused clocks Jan 20 00:45:31.115454 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:45:31.115472 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:45:31.115485 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:45:31.115497 kernel: Run /init as init process Jan 20 00:45:31.115510 kernel: with arguments: Jan 20 00:45:31.115522 kernel: /init Jan 20 00:45:31.115534 kernel: with environment: Jan 20 00:45:31.115589 kernel: HOME=/ Jan 20 00:45:31.115604 kernel: TERM=linux Jan 20 00:45:31.115618 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:45:31.115638 systemd[1]: Detected virtualization kvm. Jan 20 00:45:31.115650 systemd[1]: Detected architecture x86-64. Jan 20 00:45:31.115662 systemd[1]: Running in initrd. Jan 20 00:45:31.115673 systemd[1]: No hostname configured, using default hostname. Jan 20 00:45:31.115685 systemd[1]: Hostname set to <localhost>. Jan 20 00:45:31.115699 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:45:31.115711 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:45:31.115726 kernel: hrtimer: interrupt took 6659443 ns Jan 20 00:45:31.115738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:45:31.115750 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:45:31.115762 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:45:31.115774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:45:31.115786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:45:31.115802 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:45:31.115816 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:45:31.115869 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:45:31.115881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:45:31.115893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:45:31.115910 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:45:31.115921 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:45:31.115933 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:45:31.115944 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:45:31.115956 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:45:31.115968 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:45:31.115979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:45:31.115991 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:45:31.116003 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:45:31.116018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:45:31.116030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:45:31.116041 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:45:31.116053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 00:45:31.116065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:45:31.116077 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 00:45:31.116088 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 00:45:31.116100 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:45:31.116111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:45:31.116127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:45:31.116138 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 00:45:31.116189 systemd-journald[195]: Collecting audit messages is disabled. Jan 20 00:45:31.119645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:45:31.119672 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 00:45:31.119686 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:45:31.119701 systemd-journald[195]: Journal started Jan 20 00:45:31.119731 systemd-journald[195]: Runtime Journal (/run/log/journal/53980ab28ee84b6a84accb9d4f2a08c1) is 6.0M, max 48.3M, 42.2M free. Jan 20 00:45:31.132930 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:45:31.150794 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:45:31.167567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:45:31.186766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:45:31.213967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:45:31.224110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:45:31.261492 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:45:31.279296 systemd-modules-load[196]: Inserted module 'overlay' Jan 20 00:45:31.297990 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:45:31.310686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:45:31.348908 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 00:45:31.419966 dracut-cmdline[223]: dracut-dracut-053 Jan 20 00:45:31.435741 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 20 00:45:31.441519 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:45:31.479959 kernel: Bridge firewalling registered Jan 20 00:45:31.485206 systemd-modules-load[196]: Inserted module 'br_netfilter' Jan 20 00:45:31.499319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:45:31.530681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:45:31.596287 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:45:31.615056 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:45:31.832535 systemd-resolved[268]: Positive Trust Anchors: Jan 20 00:45:31.833517 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:45:31.836770 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:45:31.873255 systemd-resolved[268]: Defaulting to hostname 'linux'. Jan 20 00:45:31.922995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:45:31.946475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:45:31.988993 kernel: SCSI subsystem initialized Jan 20 00:45:32.007710 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:45:32.047058 kernel: iscsi: registered transport (tcp) Jan 20 00:45:32.105343 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:45:32.105597 kernel: QLogic iSCSI HBA Driver Jan 20 00:45:32.414221 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:45:32.462987 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 00:45:32.636190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 00:45:32.636386 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:45:32.636418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 00:45:32.777226 kernel: raid6: avx2x4 gen() 16410 MB/s Jan 20 00:45:32.797406 kernel: raid6: avx2x2 gen() 15695 MB/s Jan 20 00:45:32.819716 kernel: raid6: avx2x1 gen() 11312 MB/s Jan 20 00:45:32.819805 kernel: raid6: using algorithm avx2x4 gen() 16410 MB/s Jan 20 00:45:32.844036 kernel: raid6: .... xor() 4551 MB/s, rmw enabled Jan 20 00:45:32.844128 kernel: raid6: using avx2x2 recovery algorithm Jan 20 00:45:32.916778 kernel: xor: automatically using best checksumming function avx Jan 20 00:45:33.418046 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:45:33.481747 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 20 00:45:33.502246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:45:33.556543 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 20 00:45:33.577421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:45:33.637255 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:45:33.714982 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jan 20 00:45:33.823505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:45:33.859395 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:45:34.136396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:45:34.190473 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:45:34.237922 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:45:34.260428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:45:34.272670 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:45:34.279909 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:45:34.313071 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:45:34.344929 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:45:34.345128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:45:34.347512 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:45:34.350554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:45:34.350892 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:45:34.354043 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:45:34.372673 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 00:45:34.433007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:45:34.478951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:45:34.479472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:45:34.502191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:45:34.509631 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:45:34.562928 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 00:45:34.586905 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 00:45:34.589859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:45:34.639448 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 00:45:34.639519 kernel: GPT:9289727 != 19775487 Jan 20 00:45:34.639542 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 00:45:34.639604 kernel: GPT:9289727 != 19775487 Jan 20 00:45:34.639634 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 00:45:34.639653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:45:34.663016 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:45:34.714085 kernel: libata version 3.00 loaded. 
Jan 20 00:45:34.734278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:45:34.840643 kernel: AVX2 version of gcm_enc/dec engaged. Jan 20 00:45:34.893196 kernel: AES CTR mode by8 optimization enabled Jan 20 00:45:34.893260 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (477) Jan 20 00:45:34.903344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 00:45:34.940185 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 00:45:34.940466 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 00:45:34.940486 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 00:45:34.940753 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 00:45:34.941036 kernel: scsi host0: ahci Jan 20 00:45:34.941733 kernel: scsi host1: ahci Jan 20 00:45:34.946337 kernel: scsi host2: ahci Jan 20 00:45:34.951888 kernel: scsi host3: ahci Jan 20 00:45:34.976893 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Jan 20 00:45:34.976960 kernel: scsi host4: ahci Jan 20 00:45:34.977273 kernel: scsi host5: ahci Jan 20 00:45:34.977551 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 20 00:45:34.980716 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 20 00:45:34.991049 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 20 00:45:34.991130 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 20 00:45:34.991153 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 20 00:45:34.983677 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 00:45:35.022470 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 20 00:45:35.039336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 00:45:35.047206 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 00:45:35.086721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:45:35.118171 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:45:35.150748 disk-uuid[581]: Primary Header is updated. Jan 20 00:45:35.150748 disk-uuid[581]: Secondary Entries is updated. Jan 20 00:45:35.150748 disk-uuid[581]: Secondary Header is updated. 
Jan 20 00:45:35.178504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:45:35.200975 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:45:35.327893 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 00:45:35.328003 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 00:45:35.345895 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 00:45:35.345968 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 00:45:35.366995 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 00:45:35.380216 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 00:45:35.380287 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 00:45:35.380311 kernel: ata3.00: applying bridge limits Jan 20 00:45:35.391321 kernel: ata3.00: configured for UDMA/100 Jan 20 00:45:35.402723 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 00:45:35.564639 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 00:45:35.565360 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:45:35.584910 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 00:45:36.226725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:45:36.236703 disk-uuid[584]: The operation has completed successfully. Jan 20 00:45:36.399455 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:45:36.402752 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 00:45:36.482075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 00:45:36.506298 sh[600]: Success Jan 20 00:45:36.608153 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 20 00:45:36.787055 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:45:36.824893 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 00:45:36.835276 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 00:45:36.922716 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 00:45:36.922808 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:45:36.922895 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 00:45:36.928970 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:45:36.935931 kernel: BTRFS info (device dm-0): using free space tree Jan 20 00:45:36.998183 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 00:45:37.001425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:45:37.044359 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 00:45:37.067249 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:45:37.145804 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:45:37.145945 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:45:37.145968 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:45:37.176944 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:45:37.229447 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 20 00:45:37.241717 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:45:37.285214 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 00:45:37.316370 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 00:45:37.647426 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:45:37.673982 ignition[710]: Ignition 2.19.0 Jan 20 00:45:37.674570 ignition[710]: Stage: fetch-offline Jan 20 00:45:37.674709 ignition[710]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:45:37.674734 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:45:37.674979 ignition[710]: parsed url from cmdline: "" Jan 20 00:45:37.674987 ignition[710]: no config URL provided Jan 20 00:45:37.674998 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:45:37.675018 ignition[710]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:45:37.708305 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:45:37.675147 ignition[710]: op(1): [started] loading QEMU firmware config module Jan 20 00:45:37.675158 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 00:45:37.720516 ignition[710]: op(1): [finished] loading QEMU firmware config module Jan 20 00:45:37.720632 ignition[710]: QEMU firmware config was not found. Ignoring... Jan 20 00:45:37.838786 systemd-networkd[789]: lo: Link UP Jan 20 00:45:37.838818 systemd-networkd[789]: lo: Gained carrier Jan 20 00:45:37.843543 systemd-networkd[789]: Enumeration completed Jan 20 00:45:37.845406 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:45:37.846440 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:45:37.846447 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:45:37.855296 systemd-networkd[789]: eth0: Link UP Jan 20 00:45:37.855304 systemd-networkd[789]: eth0: Gained carrier Jan 20 00:45:37.855324 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:45:37.896771 systemd[1]: Reached target network.target - Network. Jan 20 00:45:38.019190 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:45:38.244743 ignition[710]: parsing config with SHA512: 10d34ea30bdbd5f82869b888e1bd6a931d7334bf1a88b6d45c3e2f5bb6730c4976e82616860c4d37fccdb6fd0685dc781efe48406a0b76c58b96b2429d3c6940 Jan 20 00:45:38.281248 unknown[710]: fetched base config from "system" Jan 20 00:45:38.282355 ignition[710]: fetch-offline: fetch-offline passed Jan 20 00:45:38.281280 unknown[710]: fetched user config from "qemu" Jan 20 00:45:38.282618 ignition[710]: Ignition finished successfully Jan 20 00:45:38.303087 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:45:38.340308 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 00:45:38.394293 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 20 00:45:38.537797 ignition[793]: Ignition 2.19.0 Jan 20 00:45:38.541040 ignition[793]: Stage: kargs Jan 20 00:45:38.541351 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:45:38.541373 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:45:38.574787 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 00:45:38.552354 ignition[793]: kargs: kargs passed Jan 20 00:45:38.552452 ignition[793]: Ignition finished successfully Jan 20 00:45:38.621897 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 00:45:38.749916 ignition[801]: Ignition 2.19.0 Jan 20 00:45:38.749957 ignition[801]: Stage: disks Jan 20 00:45:38.750303 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:45:38.750322 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:45:38.756002 ignition[801]: disks: disks passed Jan 20 00:45:38.756123 ignition[801]: Ignition finished successfully Jan 20 00:45:38.777041 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 00:45:38.810634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 00:45:38.821302 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:45:38.835982 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:45:38.841182 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:45:38.851013 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:45:38.910981 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 00:45:38.947065 systemd-networkd[789]: eth0: Gained IPv6LL Jan 20 00:45:38.982444 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 20 00:45:39.005570 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 00:45:39.035539 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 00:45:39.375932 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 00:45:39.382115 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 00:45:39.394950 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 00:45:39.430993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:45:39.450508 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 00:45:39.473548 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 00:45:39.520053 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819) Jan 20 00:45:39.520102 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:45:39.520129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:45:39.520149 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:45:39.473714 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 00:45:39.503053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:45:39.535407 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 20 00:45:39.588304 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:45:39.592165 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:45:39.631693 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:45:39.857421 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:45:39.899804 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:45:39.926498 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:45:39.951306 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:45:40.293464 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:45:40.317228 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:45:40.337169 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:45:40.363033 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:45:40.380319 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:45:40.499786 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:45:40.559098 ignition[932]: INFO : Ignition 2.19.0
Jan 20 00:45:40.559098 ignition[932]: INFO : Stage: mount
Jan 20 00:45:40.573497 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:45:40.573497 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:45:40.573497 ignition[932]: INFO : mount: mount passed
Jan 20 00:45:40.573497 ignition[932]: INFO : Ignition finished successfully
Jan 20 00:45:40.584380 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:45:40.633935 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:45:40.688276 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:45:40.725517 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Jan 20 00:45:40.738698 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:45:40.738807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:45:40.738899 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:45:40.758951 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:45:40.767405 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:45:40.908447 ignition[962]: INFO : Ignition 2.19.0
Jan 20 00:45:40.926950 ignition[962]: INFO : Stage: files
Jan 20 00:45:40.926950 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:45:40.926950 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:45:40.926950 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:45:40.926950 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:45:40.926950 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:45:40.975511 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:45:40.975511 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:45:40.975511 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:45:40.960531 unknown[962]: wrote ssh authorized keys file for user: core
Jan 20 00:45:41.011997 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 20 00:45:41.011997 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 20 00:45:41.011997 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 00:45:41.011997 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 20 00:45:41.457524 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 20 00:45:42.197075 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 00:45:42.197075 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:45:42.253281 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 20 00:45:43.030738 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 20 00:45:46.209202 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:45:46.209202 ignition[962]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 20 00:45:46.255211 ignition[962]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:45:46.541510 ignition[962]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:45:46.587773 ignition[962]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:45:46.600497 ignition[962]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:45:46.600497 ignition[962]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 00:45:46.600497 ignition[962]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 00:45:46.600497 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:45:46.600497 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:45:46.600497 ignition[962]: INFO : files: files passed
Jan 20 00:45:46.600497 ignition[962]: INFO : Ignition finished successfully
Jan 20 00:45:46.602632 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 00:45:46.682227 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 00:45:46.697432 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 00:45:46.705772 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 00:45:46.757974 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 00:45:46.706232 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 00:45:46.787639 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:45:46.787639 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:45:46.810048 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:45:46.819273 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:45:46.842005 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 00:45:46.896224 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
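The files stage above simply replays what the user config requested: files, links, systemd units, drop-ins, and enable/disable presets. As a rough illustration of where such a config comes from (only the file and unit names are taken from the log; all contents are invented), a Butane fragment like the following, transpiled to Ignition JSON, would drive this kind of activity:

    # Sketch only: a Butane config, transpiled with `butane` into the
    # Ignition JSON that the files stage consumes. Names mirror the log;
    # the inline contents are illustrative, not the actual (unlogged) ones.
    cat > example.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              REBOOT_STRATEGY=off
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false
    EOF
    butane --strict example.bu > config.ign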
Jan 20 00:45:47.921797 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 00:45:47.986532 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 00:45:48.012168 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:45:48.026311 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 00:45:48.031162 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 00:45:48.035067 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 00:45:48.043170 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 00:45:48.043443 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:45:48.054814 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:45:48.061197 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:45:48.068767 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 00:45:48.069132 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:45:48.087053 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 00:45:48.087351 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:45:48.141263 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 00:45:48.142731 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:45:48.174079 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 00:45:48.187447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 00:45:48.195349 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:45:48.226321 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 00:45:48.226728 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 00:45:48.258310 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 00:45:48.258502 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:45:48.263241 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 00:45:48.263413 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:45:48.308091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 00:45:48.308799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:45:48.324737 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 00:45:48.325057 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 00:45:48.369465 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 00:45:48.403535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 00:45:48.409063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 00:45:48.409308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:45:48.472188 ignition[1017]: INFO : Ignition 2.19.0
Jan 20 00:45:48.472188 ignition[1017]: INFO : Stage: umount
Jan 20 00:45:48.472188 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:45:48.472188 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:45:48.426456 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 00:45:48.605603 ignition[1017]: INFO : umount: umount passed
Jan 20 00:45:48.605603 ignition[1017]: INFO : Ignition finished successfully
Jan 20 00:45:48.426643 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:45:48.480080 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 00:45:48.480256 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 00:45:48.490990 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 00:45:48.491205 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 00:45:48.506194 systemd[1]: Stopped target network.target - Network.
Jan 20 00:45:48.511145 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 00:45:48.512934 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 00:45:48.516032 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 00:45:48.516137 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 00:45:48.516275 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 00:45:48.516362 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 00:45:48.516486 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 00:45:48.516563 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 00:45:48.522212 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 00:45:48.523770 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 00:45:48.532087 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 00:45:48.563409 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 00:45:48.566011 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 00:45:48.569459 systemd-networkd[789]: eth0: DHCPv6 lease lost
Jan 20 00:45:48.571616 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 00:45:48.575876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:45:48.577301 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 00:45:48.577542 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 00:45:48.579963 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 00:45:48.580060 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 00:45:48.605907 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 00:45:48.606269 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 00:45:48.619504 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 00:45:48.619622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:45:48.817444 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 00:45:48.826073 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 00:45:48.827473 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:45:48.841408 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 00:45:48.841534 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:45:48.857354 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 00:45:48.866662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:45:48.905995 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:45:48.965561 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 00:45:48.966212 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:45:48.989192 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 00:45:48.989340 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:45:49.010450 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 00:45:49.010554 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:45:49.033598 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 00:45:49.033790 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:45:49.055611 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 00:45:49.055804 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:45:49.085014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:45:49.085165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:45:49.144573 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 00:45:49.156923 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 00:45:49.158123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:45:49.183298 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 20 00:45:49.183529 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:45:49.191044 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 00:45:49.191249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:45:49.196990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:45:49.197134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:45:49.228515 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 00:45:49.228774 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 00:45:49.247179 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 00:45:49.247365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 00:45:49.306616 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 00:45:49.424328 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 00:45:49.457883 systemd[1]: Switching root.
Jan 20 00:45:49.597626 systemd-journald[195]: Journal stopped
Jan 20 00:45:58.260704 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Jan 20 00:45:58.261040 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 00:45:58.261106 kernel: SELinux: policy capability open_perms=1
Jan 20 00:45:58.261129 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 00:45:58.261151 kernel: SELinux: policy capability always_check_network=0
Jan 20 00:45:58.261181 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 00:45:58.261232 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 00:45:58.261279 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 00:45:58.261301 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 00:45:58.261349 kernel: audit: type=1403 audit(1768869950.185:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 00:45:58.261388 systemd[1]: Successfully loaded SELinux policy in 98.680ms.
Jan 20 00:45:58.261484 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 42.371ms.
Jan 20 00:45:58.261513 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:45:58.261536 systemd[1]: Detected virtualization kvm.
Jan 20 00:45:58.261557 systemd[1]: Detected architecture x86-64.
Jan 20 00:45:58.261606 systemd[1]: Detected first boot.
Jan 20 00:45:58.261627 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:45:58.261645 zram_generator::config[1084]: No configuration found.
Jan 20 00:45:58.261713 systemd[1]: Populated /etc with preset unit settings.
Jan 20 00:45:58.261770 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 00:45:58.261793 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 00:45:58.261815 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 00:45:58.261893 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 00:45:58.261915 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 00:45:58.261938 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 00:45:58.261971 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 00:45:58.262000 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 00:45:58.262021 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 00:45:58.262041 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 00:45:58.262059 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:45:58.262080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:45:58.262101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 00:45:58.262122 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 00:45:58.262143 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 00:45:58.262164 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:45:58.262191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
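"Detected first boot" and "Initializing machine ID from VM UUID" mean /etc/machine-id was still empty, so systemd seeded it from the DMI product UUID the hypervisor exposes. A quick way to compare the two on a running guest (standard sysfs path; reading it typically requires root):

    # The UUID QEMU/KVM presents to the guest via DMI...
    cat /sys/class/dmi/id/product_uuid
    # ...and the machine ID systemd derived from it on first boot.
    cat /etc/machine-id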
Jan 20 00:45:58.262214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:45:58.262233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 00:45:58.262252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:45:58.262271 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:45:58.262292 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:45:58.262314 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:45:58.262335 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 00:45:58.262362 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 00:45:58.262385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:45:58.262408 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:45:58.262433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:45:58.262454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:45:58.262474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:45:58.262494 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 00:45:58.262554 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 00:45:58.262579 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 00:45:58.262600 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 00:45:58.262628 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:45:58.262649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 00:45:58.262671 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 00:45:58.262692 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 00:45:58.262714 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 00:45:58.265922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:45:58.265964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:45:58.265988 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 00:45:58.266019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:45:58.266043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:45:58.266065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:45:58.266088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 00:45:58.266110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:45:58.266132 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 00:45:58.266201 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 20 00:45:58.266228 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 00:45:58.266259 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:45:58.266282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:45:58.266303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 00:45:58.266325 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 00:45:58.266404 systemd-journald[1172]: Collecting audit messages is disabled.
Jan 20 00:45:58.266444 systemd-journald[1172]: Journal started
Jan 20 00:45:58.266487 systemd-journald[1172]: Runtime Journal (/run/log/journal/53980ab28ee84b6a84accb9d4f2a08c1) is 6.0M, max 48.3M, 42.2M free.
Jan 20 00:45:58.783949 kernel: loop: module loaded
Jan 20 00:45:58.814021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:45:58.850897 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:45:58.850993 kernel: fuse: init (API version 7.39)
Jan 20 00:45:58.895914 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:45:58.916216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 00:45:58.938872 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 00:45:58.972657 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 00:45:58.988325 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 00:45:59.003489 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 00:45:59.016352 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 00:45:59.026732 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 00:45:59.042696 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:45:59.060068 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 00:45:59.060461 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 00:45:59.080393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:45:59.080798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:45:59.093294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:45:59.095487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:45:59.108634 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 00:45:59.111516 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 00:45:59.124785 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:45:59.127163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:45:59.149297 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:45:59.187308 kernel: ACPI: bus type drm_connector registered
Jan 20 00:45:59.193227 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 00:45:59.220216 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:45:59.235326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:45:59.260816 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
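The "Runtime Journal ... is 6.0M, max 48.3M" line reflects journald's default size caps, which are computed as a percentage of the backing filesystem rather than fixed numbers. If you wanted to pin those limits explicitly, a drop-in would look roughly like this (values illustrative, chosen only to echo the logged caps):

    # Sketch: pin journal size limits instead of relying on computed defaults.
    mkdir -p /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/size.conf <<'EOF'
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M
    EOF
    systemctl restart systemd-journald
    journalctl --disk-usage   # verify current on-disk usage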
Jan 20 00:45:59.361664 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 00:45:59.394194 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 00:45:59.442588 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 00:45:59.461402 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 00:45:59.489033 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 00:45:59.534265 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 00:45:59.547215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:45:59.582889 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 00:45:59.604301 systemd-journald[1172]: Time spent on flushing to /var/log/journal/53980ab28ee84b6a84accb9d4f2a08c1 is 386.509ms for 973 entries.
Jan 20 00:45:59.604301 systemd-journald[1172]: System Journal (/var/log/journal/53980ab28ee84b6a84accb9d4f2a08c1) is 8.0M, max 195.6M, 187.6M free.
Jan 20 00:46:00.446959 systemd-journald[1172]: Received client request to flush runtime journal.
Jan 20 00:46:00.404157 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:46:00.418318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:46:00.444156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:46:00.488066 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:46:00.502324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 00:46:00.517692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 00:46:00.542918 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 00:46:00.558554 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 00:46:00.623539 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 00:46:00.664204 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 20 00:46:00.775679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:46:00.811081 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Jan 20 00:46:00.811115 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Jan 20 00:46:00.837581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:46:00.859152 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 00:46:00.874095 udevadm[1230]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 20 00:46:00.994455 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 00:46:01.115281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:46:01.721965 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jan 20 00:46:01.725492 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Jan 20 00:46:01.750484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:46:03.490637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 00:46:03.563903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:46:03.715337 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Jan 20 00:46:04.015644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:46:04.103723 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:46:04.176268 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 00:46:04.472180 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 20 00:46:04.850049 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1262)
Jan 20 00:46:05.689108 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 00:46:05.692240 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 00:46:05.756013 kernel: ACPI: button: Power Button [PWRF]
Jan 20 00:46:05.911388 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 20 00:46:05.946075 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 00:46:05.946622 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 00:46:05.957680 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 00:46:05.961109 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 00:46:06.146107 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:46:06.298217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:46:06.309017 systemd-networkd[1264]: lo: Link UP
Jan 20 00:46:06.309580 systemd-networkd[1264]: lo: Gained carrier
Jan 20 00:46:06.318968 systemd-networkd[1264]: Enumeration completed
Jan 20 00:46:06.320126 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:46:06.320565 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:46:06.320678 systemd-networkd[1264]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:46:06.327537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:46:06.328123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:46:06.333976 systemd-networkd[1264]: eth0: Link UP
Jan 20 00:46:06.334027 systemd-networkd[1264]: eth0: Gained carrier
Jan 20 00:46:06.334054 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:46:06.349100 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 00:46:06.439769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:46:06.491069 systemd-networkd[1264]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:46:06.735995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
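Both in the initrd and here, eth0 is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the "potentially unpredictable interface name". Its effective content is approximately the following (paraphrased from Flatcar's shipped unit, not dumped from this system):

    # Inspect the catch-all network unit that matched eth0.
    cat /usr/lib/systemd/network/zz-default.network
    # Expected shape, approximately:
    #   [Match]
    #   Name=*
    #
    #   [Network]
    #   DHCP=yes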
Jan 20 00:46:06.917480 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 00:46:07.021743 kernel: kvm_amd: TSC scaling supported
Jan 20 00:46:07.022041 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 00:46:07.022086 kernel: kvm_amd: Nested Paging enabled
Jan 20 00:46:07.026697 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 00:46:07.026952 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 00:46:07.681052 kernel: EDAC MC: Ver: 3.0.0
Jan 20 00:46:07.729361 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 20 00:46:07.752370 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 20 00:46:07.813131 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:46:07.896358 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 20 00:46:07.910701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:46:07.946154 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 20 00:46:07.971960 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:46:08.057199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 20 00:46:08.076945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:46:08.082635 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 00:46:08.083045 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:46:08.091605 systemd[1]: Reached target machines.target - Containers.
Jan 20 00:46:08.101164 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 20 00:46:08.126591 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 00:46:08.145364 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 00:46:08.155669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:46:08.160571 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 00:46:08.180551 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 20 00:46:08.204169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 00:46:08.206027 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 00:46:08.239287 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 00:46:08.254001 kernel: loop0: detected capacity change from 0 to 140768
Jan 20 00:46:08.263611 systemd-networkd[1264]: eth0: Gained IPv6LL
Jan 20 00:46:08.283708 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 00:46:08.357368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 00:46:08.360358 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 20 00:46:08.362901 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 00:46:08.498957 kernel: loop1: detected capacity change from 0 to 142488
Jan 20 00:46:08.819780 kernel: loop2: detected capacity change from 0 to 224512
Jan 20 00:46:08.965964 kernel: loop3: detected capacity change from 0 to 140768
Jan 20 00:46:09.294942 kernel: loop4: detected capacity change from 0 to 142488
Jan 20 00:46:09.381960 kernel: loop5: detected capacity change from 0 to 224512
Jan 20 00:46:09.443993 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 00:46:09.446531 (sd-merge)[1323]: Merged extensions into '/usr'.
Jan 20 00:46:09.462196 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 00:46:09.462260 systemd[1]: Reloading...
Jan 20 00:46:09.917224 zram_generator::config[1352]: No configuration found.
Jan 20 00:46:11.036360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:46:11.175120 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 00:46:11.210465 systemd[1]: Reloading finished in 1747 ms.
Jan 20 00:46:11.247686 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 00:46:11.299586 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 00:46:11.352786 systemd[1]: Starting ensure-sysext.service...
Jan 20 00:46:11.387350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:46:11.407334 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
Jan 20 00:46:11.407386 systemd[1]: Reloading...
Jan 20 00:46:11.750135 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 00:46:11.750728 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 00:46:11.752478 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 00:46:11.755078 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Jan 20 00:46:11.755211 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Jan 20 00:46:11.765146 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:46:11.765193 systemd-tmpfiles[1395]: Skipping /boot
Jan 20 00:46:11.781873 zram_generator::config[1418]: No configuration found.
Jan 20 00:46:11.826786 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:46:11.826966 systemd-tmpfiles[1395]: Skipping /boot
Jan 20 00:46:12.650917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:46:12.802201 systemd[1]: Reloading finished in 1394 ms.
Jan 20 00:46:12.888880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:46:12.940532 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:46:12.990646 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
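The (sd-merge) lines show systemd-sysext overlaying the containerd, docker, and kubernetes extension images onto /usr, which is why Ignition linked the kubernetes-v1.32.4 raw image into /etc/extensions during the files stage; the preceding loopN capacity changes are those images being attached. To inspect or redo the merge by hand, the standard systemd-sysext verbs apply:

    # List merged extension images and the hierarchies they overlay.
    systemd-sysext status
    # Re-scan /etc/extensions and /var/lib/extensions, then re-apply the overlay.
    systemd-sysext refresh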
Jan 20 00:46:13.010379 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 00:46:13.029122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:46:13.054013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 00:46:13.084643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:46:13.085059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:46:13.095302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:46:13.114736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:46:13.129356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:46:13.135535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:46:13.138683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:46:13.162654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:46:13.164252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:46:13.174997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:46:13.178336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:46:13.191007 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:46:13.191361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:46:13.199785 augenrules[1493]: No rules
Jan 20 00:46:13.203936 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 00:46:13.218443 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 00:46:13.238751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:46:13.239752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:46:13.259410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:46:13.273062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:46:13.288385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:46:13.299794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:46:13.315962 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 00:46:13.323224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:46:13.332260 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 00:46:13.342370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:46:13.344011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:46:13.355553 systemd-resolved[1472]: Positive Trust Anchors:
Jan 20 00:46:13.355577 systemd-resolved[1472]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:46:13.355631 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:46:13.363123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:46:13.363469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:46:13.374690 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 00:46:13.376235 systemd-resolved[1472]: Defaulting to hostname 'linux'.
Jan 20 00:46:13.381290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:46:13.401176 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:46:13.401699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:46:13.419249 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 00:46:13.458053 systemd[1]: Reached target network.target - Network.
Jan 20 00:46:13.465321 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 00:46:13.473335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:46:13.482110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:46:13.482495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:46:13.500424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:46:13.509808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:46:13.521622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:46:13.532313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:46:13.539580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:46:13.541989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 00:46:13.542209 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:46:13.545952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:46:13.546306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:46:13.557350 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:46:13.557735 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:46:13.564720 systemd[1]: Finished ensure-sysext.service.
Jan 20 00:46:13.589698 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
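The ". IN DS 20326 8 2 e06d…" record above is the IANA root-zone KSK-2017 trust anchor that systemd-resolved ships for DNSSEC validation; the negative anchors exempt private and special-use zones from validation. Resolver state can be inspected with the standard resolvectl tool (nothing host-specific assumed here):

    # Show per-link DNS servers, DNSSEC setting, and the active trust anchors' effect.
    resolvectl status
    # Issue a query through the stub resolver to observe resolution results.
    resolvectl query flatcar.org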
Jan 20 00:46:13.590183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:46:13.600597 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:46:13.601981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:46:13.628513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:46:13.628715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:46:13.647204 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 00:46:14.093065 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 00:46:14.113493 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:46:15.625254 systemd-resolved[1472]: Clock change detected. Flushing caches.
Jan 20 00:46:15.625294 systemd-timesyncd[1539]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 00:46:15.625384 systemd-timesyncd[1539]: Initial clock synchronization to Tue 2026-01-20 00:46:15.625029 UTC.
Jan 20 00:46:15.625664 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 00:46:15.644190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 00:46:15.652757 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 00:46:15.667574 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 00:46:15.668113 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:46:15.679228 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 00:46:15.684369 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 00:46:15.697359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 00:46:15.709610 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:46:15.726329 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 00:46:15.740597 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 00:46:15.755563 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 00:46:15.769913 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 00:46:15.793298 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:46:15.804565 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:46:15.808738 systemd[1]: System is tainted: cgroupsv1
Jan 20 00:46:15.808847 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 00:46:15.808893 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 00:46:15.820485 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 00:46:15.842775 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 00:46:15.872395 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 00:46:15.882228 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 00:46:15.893278 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
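systemd-timesyncd picked up 10.0.0.1 as its NTP server (supplied via DHCP on this network) and stepped the clock, which is what triggers resolved's "Clock change detected" cache flush. Pinning a server statically would look roughly like this; the address is copied from the log, the drop-in layout is the standard scheme:

    # Sketch: pin the NTP server timesyncd should use.
    mkdir -p /etc/systemd/timesyncd.conf.d
    cat > /etc/systemd/timesyncd.conf.d/local.conf <<'EOF'
    [Time]
    NTP=10.0.0.1
    EOF
    systemctl restart systemd-timesyncd
    timedatectl timesync-status   # confirm the active server and sync state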
Jan 20 00:46:15.908231 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 00:46:15.917452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:46:15.935620 jq[1547]: false
Jan 20 00:46:15.955621 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 00:46:15.970289 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 00:46:15.976865 extend-filesystems[1549]: Found loop3
Jan 20 00:46:15.976865 extend-filesystems[1549]: Found loop4
Jan 20 00:46:15.976865 extend-filesystems[1549]: Found loop5
Jan 20 00:46:15.976865 extend-filesystems[1549]: Found sr0
Jan 20 00:46:15.976865 extend-filesystems[1549]: Found vda
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda1
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda2
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda3
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found usr
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda4
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda6
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda7
Jan 20 00:46:16.017072 extend-filesystems[1549]: Found vda9
Jan 20 00:46:16.017072 extend-filesystems[1549]: Checking size of /dev/vda9
Jan 20 00:46:16.102724 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 20 00:46:16.102771 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1576)
Jan 20 00:46:15.999591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 00:46:16.103106 extend-filesystems[1549]: Resized partition /dev/vda9
Jan 20 00:46:16.040118 dbus-daemon[1546]: [system] SELinux support is enabled
Jan 20 00:46:16.028327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 00:46:16.128355 extend-filesystems[1565]: resize2fs 1.47.1 (20-May-2024)
Jan 20 00:46:16.113219 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 00:46:16.137332 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 00:46:16.152576 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 00:46:16.159647 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 20 00:46:16.159199 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 00:46:16.170163 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 00:46:16.183157 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 00:46:16.202634 jq[1591]: true
Jan 20 00:46:16.210126 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 00:46:16.210126 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 00:46:16.210126 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 20 00:46:16.246936 extend-filesystems[1549]: Resized filesystem in /dev/vda9
Jan 20 00:46:16.238701 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
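[Editor's note: the online resize above grows the root filesystem in place while it is mounted. The block counts in the two kernel lines translate directly into sizes; a quick check of the arithmetic (4 KiB blocks, as stated by resize2fs):]

```python
BLOCK = 4096  # EXT4 block size implied by "(4k) blocks" in the resize2fs output
before, after = 553_472, 1_864_699  # block counts from the kernel resize messages

print(f"before: {before * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {after * BLOCK / 2**30:.2f} GiB")   # ~7.11 GiB
```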
Jan 20 00:46:16.258322 update_engine[1589]: I20260120 00:46:16.231137 1589 main.cc:92] Flatcar Update Engine starting
Jan 20 00:46:16.258322 update_engine[1589]: I20260120 00:46:16.233879 1589 update_check_scheduler.cc:74] Next update check in 5m42s
Jan 20 00:46:16.239302 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 00:46:16.239928 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 00:46:16.240551 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 00:46:16.266654 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 00:46:16.267207 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 00:46:16.284529 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 00:46:16.305460 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 00:46:16.307167 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 00:46:16.353926 systemd-logind[1586]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 20 00:46:16.354063 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 00:46:16.363190 jq[1601]: true
Jan 20 00:46:16.354804 systemd-logind[1586]: New seat seat0.
Jan 20 00:46:16.359849 (ntainerd)[1602]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 20 00:46:16.361469 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 00:46:16.386116 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 00:46:16.386696 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 00:46:16.412295 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 00:46:16.450712 tar[1600]: linux-amd64/LICENSE
Jan 20 00:46:16.455494 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 20 00:46:16.462144 tar[1600]: linux-amd64/helm
Jan 20 00:46:16.467395 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 00:46:16.486324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 00:46:16.584144 bash[1644]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 00:46:16.658446 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 00:46:16.663050 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 00:46:16.664637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 00:46:16.664893 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 00:46:16.680165 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 00:46:16.680420 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 00:46:16.695809 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 00:46:16.724694 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 00:46:16.779297 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 00:46:16.895368 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 00:46:16.895888 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 00:46:16.945885 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 00:46:16.976359 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 00:46:17.099874 locksmithd[1649]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 00:46:17.582410 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 00:46:17.631287 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 00:46:17.661565 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 20 00:46:17.754853 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 00:46:18.295310 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 00:46:18.373755 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:45436.service - OpenSSH per-connection server daemon (10.0.0.1:45436).
Jan 20 00:46:19.209530 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 45436 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:19.216486 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:19.249656 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 00:46:19.288910 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 00:46:19.497222 containerd[1602]: time="2026-01-20T00:46:19.494341006Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 20 00:46:19.507127 systemd-logind[1586]: New session 1 of user core.
Jan 20 00:46:19.625914 containerd[1602]: time="2026-01-20T00:46:19.625847000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.633343486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.633400261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.633424908Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.633803675Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.633862305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.636106233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:46:19.636317 containerd[1602]: time="2026-01-20T00:46:19.636136119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:46:19.637084 containerd[1602]: time="2026-01-20T00:46:19.636940652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:46:19.637186 containerd[1602]: time="2026-01-20T00:46:19.637163387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:46:19.637271 containerd[1602]: time="2026-01-20T00:46:19.637247554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:46:19.637348 containerd[1602]: time="2026-01-20T00:46:19.637326542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:46:19.637632 containerd[1602]: time="2026-01-20T00:46:19.637604621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:46:19.638322 containerd[1602]: time="2026-01-20T00:46:19.638295211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:46:19.638686 containerd[1602]: time="2026-01-20T00:46:19.638658809Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:46:19.638763 containerd[1602]: time="2026-01-20T00:46:19.638745020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:46:19.640075 containerd[1602]: time="2026-01-20T00:46:19.638939103Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:46:19.640373 containerd[1602]: time="2026-01-20T00:46:19.640345518Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:46:19.677775 containerd[1602]: time="2026-01-20T00:46:19.677611673Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:46:19.680347 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:46:19.687235 containerd[1602]: time="2026-01-20T00:46:19.681691339Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:46:19.687235 containerd[1602]: time="2026-01-20T00:46:19.681799781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:46:19.687235 containerd[1602]: time="2026-01-20T00:46:19.681858591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:46:19.687235 containerd[1602]: time="2026-01-20T00:46:19.681886062Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
Jan 20 00:46:19.689713 containerd[1602]: time="2026-01-20T00:46:19.689602919Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.693182432Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694082803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694195774Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694225519Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694366492Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694481487Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694507696Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694622270Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694732887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.695219 containerd[1602]: time="2026-01-20T00:46:19.694848252Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.697209 containerd[1602]: time="2026-01-20T00:46:19.694881504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.697684126Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.697901844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698062163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698099493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698127245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698155879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698185574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698211993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698240296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698269260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698337758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698365840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698394995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698423147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702066 containerd[1602]: time="2026-01-20T00:46:19.698455377Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698512735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698545116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698571915Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698682391Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698750488Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698780855Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698808036Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698830388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698857809Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698916989Z" level=info msg="NRI interface is disabled by configuration."
Jan 20 00:46:19.702697 containerd[1602]: time="2026-01-20T00:46:19.698944751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 20 00:46:19.705413 containerd[1602]: time="2026-01-20T00:46:19.701668797Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 20 00:46:19.705413 containerd[1602]: time="2026-01-20T00:46:19.701806003Z" level=info msg="Connect containerd service"
Jan 20 00:46:19.705818 containerd[1602]: time="2026-01-20T00:46:19.701943720Z" level=info msg="using legacy CRI server"
Jan 20 00:46:19.705909 containerd[1602]: time="2026-01-20T00:46:19.705881141Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 00:46:19.706310 containerd[1602]: time="2026-01-20T00:46:19.706278443Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 20 00:46:19.707784 containerd[1602]: time="2026-01-20T00:46:19.707746954Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
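[Editor's note: the CNI error above is the normal state of a node that has not been joined to a cluster yet: nothing has populated NetworkPluginConfDir (/etc/cni/net.d, per the CRI config dump). As an illustration of the kind of file the conf syncer scans for, a sketch that writes a minimal bridge conflist; the network name, bridge name, and subnet are assumptions for the example, not values from this host:]

```python
import json, pathlib

# Hypothetical minimal CNI config of the shape containerd scans /etc/cni/net.d for.
# Assumes the standard bridge/host-local/portmap plugin binaries under /opt/cni/bin
# (the NetworkPluginBinDir from the config dump above). Requires root to write /etc.
conflist = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "plugins": [
        {"type": "bridge", "bridge": "cni0", "isGateway": True, "ipMasq": True,
         "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
```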
Jan 20 00:46:19.711276 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 20 00:46:19.722240 containerd[1602]: time="2026-01-20T00:46:19.722190319Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 00:46:19.722450 containerd[1602]: time="2026-01-20T00:46:19.722426711Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 00:46:19.722701 containerd[1602]: time="2026-01-20T00:46:19.722626534Z" level=info msg="Start subscribing containerd event"
Jan 20 00:46:19.901748 containerd[1602]: time="2026-01-20T00:46:19.901195274Z" level=info msg="Start recovering state"
Jan 20 00:46:19.901748 containerd[1602]: time="2026-01-20T00:46:19.901731104Z" level=info msg="Start event monitor"
Jan 20 00:46:19.907444 containerd[1602]: time="2026-01-20T00:46:19.907396778Z" level=info msg="Start snapshots syncer"
Jan 20 00:46:19.907875 containerd[1602]: time="2026-01-20T00:46:19.907840257Z" level=info msg="Start cni network conf syncer for default"
Jan 20 00:46:19.908154 containerd[1602]: time="2026-01-20T00:46:19.908123676Z" level=info msg="Start streaming server"
Jan 20 00:46:19.908856 containerd[1602]: time="2026-01-20T00:46:19.908828552Z" level=info msg="containerd successfully booted in 0.629371s"
Jan 20 00:46:19.919730 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 00:46:19.939285 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 20 00:46:20.211799 tar[1600]: linux-amd64/README.md
Jan 20 00:46:20.253468 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 00:46:20.314396 systemd[1681]: Queued start job for default target default.target.
Jan 20 00:46:20.315299 systemd[1681]: Created slice app.slice - User Application Slice.
Jan 20 00:46:20.315368 systemd[1681]: Reached target paths.target - Paths.
Jan 20 00:46:20.315391 systemd[1681]: Reached target timers.target - Timers.
Jan 20 00:46:20.362306 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 20 00:46:20.391172 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 00:46:20.391302 systemd[1681]: Reached target sockets.target - Sockets.
Jan 20 00:46:20.391325 systemd[1681]: Reached target basic.target - Basic System.
Jan 20 00:46:20.391564 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 00:46:20.391768 systemd[1681]: Reached target default.target - Main User Target.
Jan 20 00:46:20.391851 systemd[1681]: Startup finished in 426ms.
Jan 20 00:46:20.511237 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 00:46:20.615684 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:45440.service - OpenSSH per-connection server daemon (10.0.0.1:45440).
Jan 20 00:46:20.795760 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 45440 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:21.371493 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:21.409551 systemd-logind[1586]: New session 2 of user core.
Jan 20 00:46:21.438368 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 20 00:46:21.596093 sshd[1699]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:21.615499 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:45454.service - OpenSSH per-connection server daemon (10.0.0.1:45454).
Jan 20 00:46:21.703769 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:45440.service: Deactivated successfully.
Jan 20 00:46:21.723445 systemd[1]: session-2.scope: Deactivated successfully.
Jan 20 00:46:21.727100 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit.
Jan 20 00:46:21.740752 systemd-logind[1586]: Removed session 2.
Jan 20 00:46:21.857812 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 45454 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:21.862437 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:21.900593 systemd-logind[1586]: New session 3 of user core.
Jan 20 00:46:21.926715 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 00:46:22.086655 sshd[1704]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:22.106823 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:45454.service: Deactivated successfully.
Jan 20 00:46:22.116456 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 00:46:22.116457 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit.
Jan 20 00:46:22.122757 systemd-logind[1586]: Removed session 3.
Jan 20 00:46:23.251400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:46:23.622132 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 00:46:23.623249 systemd[1]: Startup finished in 27.172s (kernel) + 32.023s (userspace) = 59.195s.
Jan 20 00:46:23.639141 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:46:27.260913 kubelet[1727]: E0120 00:46:27.252563 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:46:27.283273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:46:27.284803 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 00:46:32.129460 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:50956.service - OpenSSH per-connection server daemon (10.0.0.1:50956).
Jan 20 00:46:32.250099 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 50956 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:32.248340 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:32.281261 systemd-logind[1586]: New session 4 of user core.
Jan 20 00:46:32.290677 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 00:46:32.392022 sshd[1737]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:32.440606 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:36630.service - OpenSSH per-connection server daemon (10.0.0.1:36630).
Jan 20 00:46:32.441600 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:50956.service: Deactivated successfully.
Jan 20 00:46:32.457821 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 00:46:32.460305 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit.
Jan 20 00:46:32.468733 systemd-logind[1586]: Removed session 4.
Jan 20 00:46:32.551295 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 36630 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:32.555652 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:32.571487 systemd-logind[1586]: New session 5 of user core.
Jan 20 00:46:32.582628 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 00:46:32.658220 sshd[1742]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:32.683478 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:36640.service - OpenSSH per-connection server daemon (10.0.0.1:36640).
Jan 20 00:46:32.684545 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:36630.service: Deactivated successfully.
Jan 20 00:46:32.694680 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit.
Jan 20 00:46:32.697835 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 00:46:32.706813 systemd-logind[1586]: Removed session 5.
Jan 20 00:46:32.746133 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 36640 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:32.750398 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:32.770441 systemd-logind[1586]: New session 6 of user core.
Jan 20 00:46:32.781637 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 00:46:32.900818 sshd[1750]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:32.959367 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:36646.service - OpenSSH per-connection server daemon (10.0.0.1:36646).
Jan 20 00:46:32.960650 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:36640.service: Deactivated successfully.
Jan 20 00:46:32.963517 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 00:46:32.965845 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit.
Jan 20 00:46:32.974776 systemd-logind[1586]: Removed session 6.
Jan 20 00:46:33.030777 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 36646 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:33.032944 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:33.044485 systemd-logind[1586]: New session 7 of user core.
Jan 20 00:46:33.054709 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 00:46:33.180381 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 00:46:33.183176 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:46:33.226472 sudo[1765]: pam_unix(sudo:session): session closed for user root
Jan 20 00:46:33.235629 sshd[1759]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:33.260640 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648).
Jan 20 00:46:33.262665 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:36646.service: Deactivated successfully.
Jan 20 00:46:33.274278 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit.
Jan 20 00:46:33.280894 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 00:46:33.286141 systemd-logind[1586]: Removed session 7.
Jan 20 00:46:33.353242 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:33.356927 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:33.380485 systemd-logind[1586]: New session 8 of user core.
Jan 20 00:46:33.393638 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 00:46:33.474822 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 00:46:33.477529 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:46:33.491390 sudo[1775]: pam_unix(sudo:session): session closed for user root
Jan 20 00:46:33.512687 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 20 00:46:33.513817 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:46:33.558487 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 20 00:46:33.572567 auditctl[1778]: No rules
Jan 20 00:46:33.574250 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 00:46:33.578647 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 20 00:46:33.602762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:46:33.685820 augenrules[1797]: No rules
Jan 20 00:46:33.688457 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 00:46:33.692476 sudo[1774]: pam_unix(sudo:session): session closed for user root
Jan 20 00:46:33.697178 sshd[1767]: pam_unix(sshd:session): session closed for user core
Jan 20 00:46:33.716576 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656).
Jan 20 00:46:33.717796 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:36648.service: Deactivated successfully.
Jan 20 00:46:33.724114 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit.
Jan 20 00:46:33.728523 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 00:46:33.730863 systemd-logind[1586]: Removed session 8.
Jan 20 00:46:33.787617 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:46:33.790573 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:46:33.806248 systemd-logind[1586]: New session 9 of user core.
Jan 20 00:46:33.813776 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 00:46:33.890840 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 00:46:33.891581 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:46:34.673461 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 00:46:34.702345 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 00:46:35.405353 dockerd[1828]: time="2026-01-20T00:46:35.405203161Z" level=info msg="Starting up"
Jan 20 00:46:35.994327 dockerd[1828]: time="2026-01-20T00:46:35.993654374Z" level=info msg="Loading containers: start."
Jan 20 00:46:36.374830 kernel: Initializing XFRM netlink socket
Jan 20 00:46:36.649668 systemd-networkd[1264]: docker0: Link UP
Jan 20 00:46:36.696302 dockerd[1828]: time="2026-01-20T00:46:36.696174641Z" level=info msg="Loading containers: done."
Jan 20 00:46:36.788818 dockerd[1828]: time="2026-01-20T00:46:36.787813573Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 00:46:36.790597 dockerd[1828]: time="2026-01-20T00:46:36.789280792Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 20 00:46:36.790597 dockerd[1828]: time="2026-01-20T00:46:36.789473752Z" level=info msg="Daemon has completed initialization"
Jan 20 00:46:36.947266 dockerd[1828]: time="2026-01-20T00:46:36.945463198Z" level=info msg="API listen on /run/docker.sock"
Jan 20 00:46:36.945849 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 00:46:37.677437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 00:46:37.700472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:46:40.817345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:46:40.849039 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:46:41.179233 kubelet[1987]: E0120 00:46:41.176534 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:46:41.196074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:46:41.201685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 00:46:42.804264 containerd[1602]: time="2026-01-20T00:46:42.802937801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 20 00:46:45.137282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924692360.mount: Deactivated successfully.
Jan 20 00:46:51.237911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 00:46:51.261349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
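[Editor's note: "API listen on /run/docker.sock" above means the Engine API is reachable over a Unix socket. A tiny liveness check against that socket, using Docker's documented GET /_ping endpoint; raw HTTP over AF_UNIX so only the Python stdlib is needed (illustrative sketch, requires permission to read the socket):]

```python
import socket

# Ping the socket dockerd reported above; /_ping returns "OK" with status 200.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.split(b"\r\n")[0].decode())           # expect: HTTP/1.1 200 OK
print(reply.rsplit(b"\r\n\r\n", 1)[-1].decode())  # body: OK
```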
Jan 20 00:46:51.475254 containerd[1602]: time="2026-01-20T00:46:51.474861975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:46:51.489566 containerd[1602]: time="2026-01-20T00:46:51.487314171Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647"
Jan 20 00:46:51.494398 containerd[1602]: time="2026-01-20T00:46:51.492504006Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:46:51.593267 containerd[1602]: time="2026-01-20T00:46:51.592535348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:46:52.197336 containerd[1602]: time="2026-01-20T00:46:52.196643764Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 9.393226537s"
Jan 20 00:46:52.197336 containerd[1602]: time="2026-01-20T00:46:52.196765602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 20 00:46:52.206650 containerd[1602]: time="2026-01-20T00:46:52.203853570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 20 00:46:52.825376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:46:52.871352 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:46:53.552643 kubelet[2067]: E0120 00:46:53.552302 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:46:53.564359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:46:53.565045 systemd[1]: kubelet.service: Failed with result 'exit-code'.
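[Editor's note: the repeating kubelet crash is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, so every start fails with ENOENT and systemd schedules another restart. That the file is later written by a bootstrap tool such as kubeadm is an assumption about this host's workflow. A sketch of the same precondition check, as one might use in a node bring-up script (the path is taken verbatim from the error above):]

```python
from pathlib import Path

# Path taken verbatim from the kubelet error above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present() -> bool:
    # kubelet exits with status 1 when opening this file fails with ENOENT,
    # which is exactly what produces the restart loop in the log.
    return KUBELET_CONFIG.is_file()

if not kubelet_config_present():
    print(f"waiting for bootstrap to write {KUBELET_CONFIG}")
```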
Jan 20 00:46:57.762643 containerd[1602]: time="2026-01-20T00:46:57.758120155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:46:57.765196 containerd[1602]: time="2026-01-20T00:46:57.764003119Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354"
Jan 20 00:46:57.768819 containerd[1602]: time="2026-01-20T00:46:57.768720493Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:46:57.779061 containerd[1602]: time="2026-01-20T00:46:57.778751696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:46:57.783409 containerd[1602]: time="2026-01-20T00:46:57.783305126Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 5.579407322s"
Jan 20 00:46:57.783523 containerd[1602]: time="2026-01-20T00:46:57.783410865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 20 00:46:57.786776 containerd[1602]: time="2026-01-20T00:46:57.786638762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 20 00:47:00.969864 containerd[1602]: time="2026-01-20T00:47:00.969728745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:00.973202 containerd[1602]: time="2026-01-20T00:47:00.972917591Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076"
Jan 20 00:47:00.976701 containerd[1602]: time="2026-01-20T00:47:00.976565029Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:00.987819 containerd[1602]: time="2026-01-20T00:47:00.987655981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:00.991024 containerd[1602]: time="2026-01-20T00:47:00.990838608Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.204138261s"
Jan 20 00:47:00.991024 containerd[1602]: time="2026-01-20T00:47:00.990914751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 20 00:47:00.992860 containerd[1602]: time="2026-01-20T00:47:00.992736774Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 20 00:47:01.538690 update_engine[1589]: I20260120 00:47:01.493824 1589 update_attempter.cc:509] Updating boot flags...
Jan 20 00:47:02.081434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2092)
Jan 20 00:47:03.830582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 00:47:03.876777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:04.216367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:04.228123 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:47:05.226419 kubelet[2114]: E0120 00:47:05.226164 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:47:05.236824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:47:05.237509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 00:47:05.891493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966543943.mount: Deactivated successfully.
Jan 20 00:47:08.854390 containerd[1602]: time="2026-01-20T00:47:08.852432459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:08.854390 containerd[1602]: time="2026-01-20T00:47:08.853763436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899"
Jan 20 00:47:08.856542 containerd[1602]: time="2026-01-20T00:47:08.855659838Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:08.868675 containerd[1602]: time="2026-01-20T00:47:08.863465914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:08.868675 containerd[1602]: time="2026-01-20T00:47:08.864464330Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 7.871611059s"
Jan 20 00:47:08.868675 containerd[1602]: time="2026-01-20T00:47:08.864525424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 20 00:47:08.868675 containerd[1602]: time="2026-01-20T00:47:08.867194211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 20 00:47:09.833596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559204636.mount: Deactivated successfully.
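[Editor's note: each pull record pairs a "bytes read" count with a pull duration, which is enough to estimate registry throughput. Using the kube-proxy numbers above:]

```python
# Throughput estimate for the kube-proxy pull logged above.
bytes_read = 31_161_899   # from "stop pulling ... bytes read=31161899"
duration_s = 7.871611059  # from "Pulled image ... in 7.871611059s"

rate = bytes_read / duration_s
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")  # ~3.96 MB/s (~3.78 MiB/s)
```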
Jan 20 00:47:14.630497 containerd[1602]: time="2026-01-20T00:47:14.629424573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:14.634076 containerd[1602]: time="2026-01-20T00:47:14.633898250Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jan 20 00:47:14.638076 containerd[1602]: time="2026-01-20T00:47:14.637111142Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:14.652255 containerd[1602]: time="2026-01-20T00:47:14.652142061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:14.654639 containerd[1602]: time="2026-01-20T00:47:14.654546714Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 5.787305635s"
Jan 20 00:47:14.654639 containerd[1602]: time="2026-01-20T00:47:14.654630971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 20 00:47:14.657137 containerd[1602]: time="2026-01-20T00:47:14.657034481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 20 00:47:15.371778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 00:47:15.381785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:15.392137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount803413074.mount: Deactivated successfully.
Jan 20 00:47:15.409116 containerd[1602]: time="2026-01-20T00:47:15.409007700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:15.412885 containerd[1602]: time="2026-01-20T00:47:15.412800726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 20 00:47:15.415336 containerd[1602]: time="2026-01-20T00:47:15.415263434Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:15.427608 containerd[1602]: time="2026-01-20T00:47:15.427489233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:15.429053 containerd[1602]: time="2026-01-20T00:47:15.428647890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 771.540082ms"
Jan 20 00:47:15.429053 containerd[1602]: time="2026-01-20T00:47:15.428707982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 20 00:47:15.431857 containerd[1602]: time="2026-01-20T00:47:15.431774507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 20 00:47:16.091694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:16.099688 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:47:16.336649 kubelet[2195]: E0120 00:47:16.334812 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:47:16.353450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:47:16.353918 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 00:47:16.425028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452393393.mount: Deactivated successfully.
Jan 20 00:47:24.149386 containerd[1602]: time="2026-01-20T00:47:24.148738918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:24.153487 containerd[1602]: time="2026-01-20T00:47:24.153249832Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Jan 20 00:47:24.156286 containerd[1602]: time="2026-01-20T00:47:24.156151010Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:24.169807 containerd[1602]: time="2026-01-20T00:47:24.169650334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:47:24.172476 containerd[1602]: time="2026-01-20T00:47:24.172355727Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.74053381s"
Jan 20 00:47:24.173609 containerd[1602]: time="2026-01-20T00:47:24.173513418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 20 00:47:26.468207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 20 00:47:26.491344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:26.954530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:26.961159 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:47:27.197309 kubelet[2297]: E0120 00:47:27.197215 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:47:27.216133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:47:27.216517 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 00:47:29.791826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:29.835661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:30.260546 systemd[1]: Reloading requested from client PID 2314 ('systemctl') (unit session-9.scope)...
Jan 20 00:47:30.260616 systemd[1]: Reloading...
Jan 20 00:47:30.548060 zram_generator::config[2353]: No configuration found.
Jan 20 00:47:30.943524 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:47:31.143115 systemd[1]: Reloading finished in 880 ms.
Jan 20 00:47:31.296764 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 20 00:47:31.297072 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 20 00:47:31.297671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:31.327439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:31.795931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:31.867216 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 00:47:32.162876 kubelet[2413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 00:47:32.162876 kubelet[2413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 00:47:32.162876 kubelet[2413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 00:47:32.162876 kubelet[2413]: I0120 00:47:32.162275 2413 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 00:47:32.882733 kubelet[2413]: I0120 00:47:32.882030 2413 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 20 00:47:32.882733 kubelet[2413]: I0120 00:47:32.882102 2413 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 00:47:32.885078 kubelet[2413]: I0120 00:47:32.883377 2413 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 20 00:47:32.994886 kubelet[2413]: E0120 00:47:32.994816 2413 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:32.998749 kubelet[2413]: I0120 00:47:32.997710 2413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 00:47:33.059187 kubelet[2413]: E0120 00:47:33.058275 2413 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 20 00:47:33.059187 kubelet[2413]: I0120 00:47:33.058332 2413 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 20 00:47:33.091682 kubelet[2413]: I0120 00:47:33.091593 2413 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 00:47:33.092818 kubelet[2413]: I0120 00:47:33.092699 2413 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 00:47:33.093310 kubelet[2413]: I0120 00:47:33.092799 2413 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 20 00:47:33.093666 kubelet[2413]: I0120 00:47:33.093323 2413 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 00:47:33.093666 kubelet[2413]: I0120 00:47:33.093341 2413 container_manager_linux.go:304] "Creating device plugin manager"
Jan 20 00:47:33.096181 kubelet[2413]: I0120 00:47:33.096082 2413 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 00:47:33.125575 kubelet[2413]: I0120 00:47:33.122159 2413 kubelet.go:446] "Attempting to sync node with API server"
Jan 20 00:47:33.125575 kubelet[2413]: I0120 00:47:33.122251 2413 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 00:47:33.125575 kubelet[2413]: I0120 00:47:33.122310 2413 kubelet.go:352] "Adding apiserver pod source"
Jan 20 00:47:33.125575 kubelet[2413]: I0120 00:47:33.122348 2413 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 00:47:33.137704 kubelet[2413]: W0120 00:47:33.134024 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:33.137704 kubelet[2413]: E0120 00:47:33.134142 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:33.137704 kubelet[2413]: I0120 00:47:33.134373 2413 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 20 00:47:33.137704 kubelet[2413]: I0120 00:47:33.135135 2413 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 00:47:33.137704 kubelet[2413]: W0120 00:47:33.135254 2413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 00:47:33.139170 kubelet[2413]: W0120 00:47:33.138638 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:33.139170 kubelet[2413]: E0120 00:47:33.138720 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:33.150695 kubelet[2413]: I0120 00:47:33.149521 2413 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 00:47:33.150695 kubelet[2413]: I0120 00:47:33.149608 2413 server.go:1287] "Started kubelet"
Jan 20 00:47:33.153581 kubelet[2413]: I0120 00:47:33.151676 2413 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 00:47:33.153581 kubelet[2413]: I0120 00:47:33.153396 2413 server.go:479] "Adding debug handlers to kubelet server"
Jan 20 00:47:33.158060 kubelet[2413]: I0120 00:47:33.156239 2413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 00:47:33.158060 kubelet[2413]: I0120 00:47:33.157052 2413 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 00:47:33.165621 kubelet[2413]: I0120 00:47:33.162698 2413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 00:47:33.165621 kubelet[2413]: I0120 00:47:33.162839 2413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 00:47:33.165621 kubelet[2413]: E0120 00:47:33.157222 2413 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c49f103ce1d7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:47:33.149547901 +0000 UTC m=+1.213854934,LastTimestamp:2026-01-20 00:47:33.149547901 +0000 UTC m=+1.213854934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 00:47:33.166571 kubelet[2413]: I0120 00:47:33.165816 2413 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 00:47:33.166571 kubelet[2413]: I0120 00:47:33.166024 2413 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 00:47:33.166571 kubelet[2413]: I0120 00:47:33.166095 2413 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 00:47:33.168792 kubelet[2413]: W0120 00:47:33.168392 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:33.171813 kubelet[2413]: E0120 00:47:33.170579 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 00:47:33.171813 kubelet[2413]: E0120 00:47:33.171001 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
Jan 20 00:47:33.171813 kubelet[2413]: E0120 00:47:33.171621 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:33.173919 kubelet[2413]: E0120 00:47:33.173845 2413 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 00:47:33.180885 kubelet[2413]: I0120 00:47:33.180853 2413 factory.go:221] Registration of the containerd container factory successfully
Jan 20 00:47:33.181181 kubelet[2413]: I0120 00:47:33.181163 2413 factory.go:221] Registration of the systemd container factory successfully
Jan 20 00:47:33.185647 kubelet[2413]: I0120 00:47:33.185097 2413 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 00:47:33.248128 kubelet[2413]: I0120 00:47:33.247893 2413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 20 00:47:33.256631 kubelet[2413]: I0120 00:47:33.254337 2413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 20 00:47:33.256631 kubelet[2413]: I0120 00:47:33.254516 2413 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 20 00:47:33.256631 kubelet[2413]: I0120 00:47:33.254561 2413 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
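Every client call above is refused because the kube-apiserver it targets (https://10.0.0.92:6443) is not up yet; this kubelet is about to create it as a static pod from /etc/kubernetes/manifests. While it waits, the lease controller's retry interval doubles on each failure: the log shows 200ms here, then 400ms, 800ms, 1.6s, and 3.2s below. A sketch of that doubling backoff; the 7s cap is an assumption for illustration, the log itself only shows intervals up to 3.2s:

def lease_backoff(base_ms: float = 200.0, factor: float = 2.0, cap_ms: float = 7000.0):
    """Yield retry delays that double per failure, matching the intervals logged."""
    delay = base_ms
    while True:
        yield min(delay, cap_ms)
        delay *= factor

gen = lease_backoff()
print([next(gen) for _ in range(5)])  # [200.0, 400.0, 800.0, 1600.0, 3200.0]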
Jan 20 00:47:33.256631 kubelet[2413]: I0120 00:47:33.254599 2413 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 20 00:47:33.256631 kubelet[2413]: E0120 00:47:33.254683 2413 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 00:47:33.256631 kubelet[2413]: W0120 00:47:33.255386 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:33.256631 kubelet[2413]: E0120 00:47:33.255434 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:33.272561 kubelet[2413]: E0120 00:47:33.272301 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 00:47:33.274063 kubelet[2413]: I0120 00:47:33.273803 2413 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 00:47:33.274063 kubelet[2413]: I0120 00:47:33.273829 2413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 00:47:33.274063 kubelet[2413]: I0120 00:47:33.273873 2413 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 00:47:33.362397 kubelet[2413]: E0120 00:47:33.360736 2413 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 00:47:33.373351 kubelet[2413]: E0120 00:47:33.373190 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 20 00:47:33.377726 kubelet[2413]: E0120 00:47:33.377415 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
Jan 20 00:47:33.418038 kubelet[2413]: I0120 00:47:33.412722 2413 policy_none.go:49] "None policy: Start"
Jan 20 00:47:33.418038 kubelet[2413]: I0120 00:47:33.414612 2413 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 00:47:33.428275 kubelet[2413]: I0120 00:47:33.422556 2413 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 00:47:33.458306 kubelet[2413]: I0120 00:47:33.458257 2413 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 20 00:47:33.461022 kubelet[2413]: I0120 00:47:33.459017 2413 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 00:47:33.461022 kubelet[2413]: I0120 00:47:33.459072 2413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 00:47:33.461022 kubelet[2413]: I0120 00:47:33.460460 2413 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 00:47:33.477847 kubelet[2413]: E0120 00:47:33.476605 2413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 00:47:33.477847 kubelet[2413]: E0120 00:47:33.476692 2413 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 20 00:47:33.566103 kubelet[2413]: I0120 00:47:33.566021 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:47:33.574310 kubelet[2413]: I0120 00:47:33.574184 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8291a7627b19eedc11728f83f8ae425c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8291a7627b19eedc11728f83f8ae425c\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:47:33.574310 kubelet[2413]: I0120 00:47:33.574263 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8291a7627b19eedc11728f83f8ae425c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8291a7627b19eedc11728f83f8ae425c\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:47:33.653117 kubelet[2413]: E0120 00:47:33.652535 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 20 00:47:33.665197 kubelet[2413]: I0120 00:47:33.665138 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8291a7627b19eedc11728f83f8ae425c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8291a7627b19eedc11728f83f8ae425c\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:47:33.685668 kubelet[2413]: E0120 00:47:33.685357 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:33.697263 kubelet[2413]: E0120 00:47:33.697176 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:33.698607 kubelet[2413]: E0120 00:47:33.698457 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:33.779277 kubelet[2413]: E0120 00:47:33.779109 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms"
Jan 20 00:47:33.870286 kubelet[2413]: I0120 00:47:33.869329 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:33.870286 kubelet[2413]: I0120 00:47:33.869608 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 20 00:47:33.870286 kubelet[2413]: I0120 00:47:33.869723 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:33.870286 kubelet[2413]: I0120 00:47:33.869754 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:33.870286 kubelet[2413]: I0120 00:47:33.869779 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:33.884371 kubelet[2413]: I0120 00:47:33.869802 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:33.899926 kubelet[2413]: I0120 00:47:33.896617 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:47:33.912441 kubelet[2413]: E0120 00:47:33.912327 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 20 00:47:33.992805 kubelet[2413]: E0120 00:47:33.992561 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:33.998149 kubelet[2413]: E0120 00:47:33.998116 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:34.000240 kubelet[2413]: E0120 00:47:33.999939 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:34.003642 containerd[1602]: time="2026-01-20T00:47:34.003260728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}"
Jan 20 00:47:34.003642 containerd[1602]: time="2026-01-20T00:47:34.003596755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8291a7627b19eedc11728f83f8ae425c,Namespace:kube-system,Attempt:0,}"
Jan 20 00:47:34.004645 containerd[1602]: time="2026-01-20T00:47:34.004581044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}"
Jan 20 00:47:34.035595 kubelet[2413]: W0120 00:47:34.034950 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:34.035595 kubelet[2413]: E0120 00:47:34.035096 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:34.081599 kubelet[2413]: W0120 00:47:34.080760 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:34.081599 kubelet[2413]: E0120 00:47:34.080927 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:34.374106 kubelet[2413]: I0120 00:47:34.351616 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:47:34.413121 kubelet[2413]: E0120 00:47:34.393915 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 20 00:47:34.413121 kubelet[2413]: W0120 00:47:34.394238 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:34.413121 kubelet[2413]: E0120 00:47:34.394916 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:34.492166 kubelet[2413]: W0120 00:47:34.491874 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:34.492508 kubelet[2413]: E0120 00:47:34.492399 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:34.583178 kubelet[2413]: E0120 00:47:34.582864 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s"
Jan 20 00:47:35.022424 kubelet[2413]: E0120 00:47:35.022235 2413 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:35.038916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126624146.mount: Deactivated successfully.
Jan 20 00:47:35.060132 containerd[1602]: time="2026-01-20T00:47:35.059827291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:47:35.076782 containerd[1602]: time="2026-01-20T00:47:35.076659841Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 20 00:47:35.087909 containerd[1602]: time="2026-01-20T00:47:35.087712187Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:47:35.092663 containerd[1602]: time="2026-01-20T00:47:35.091618682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:47:35.092822 containerd[1602]: time="2026-01-20T00:47:35.092760925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:47:35.094939 containerd[1602]: time="2026-01-20T00:47:35.094855297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 20 00:47:35.099168 containerd[1602]: time="2026-01-20T00:47:35.099110132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 20 00:47:35.106165 containerd[1602]: time="2026-01-20T00:47:35.105145722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.101463257s"
Jan 20 00:47:35.106165 containerd[1602]: time="2026-01-20T00:47:35.105654621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:47:35.110230 containerd[1602]: time="2026-01-20T00:47:35.110132573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.105688164s"
Jan 20 00:47:35.119770 containerd[1602]: time="2026-01-20T00:47:35.119644063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.114959986s"
Jan 20 00:47:35.200400 kubelet[2413]: I0120 00:47:35.199878 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:47:35.200629 kubelet[2413]: E0120 00:47:35.200455 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 20 00:47:35.843022 containerd[1602]: time="2026-01-20T00:47:35.840298014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:47:35.843022 containerd[1602]: time="2026-01-20T00:47:35.842111260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:47:35.843022 containerd[1602]: time="2026-01-20T00:47:35.842142479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:47:35.843022 containerd[1602]: time="2026-01-20T00:47:35.842342702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:47:35.852590 containerd[1602]: time="2026-01-20T00:47:35.852042499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:47:35.852590 containerd[1602]: time="2026-01-20T00:47:35.852186498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:47:35.852590 containerd[1602]: time="2026-01-20T00:47:35.852316832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:47:35.854726 containerd[1602]: time="2026-01-20T00:47:35.852469706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:47:35.867678 containerd[1602]: time="2026-01-20T00:47:35.865352554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:47:35.872755 kubelet[2413]: W0120 00:47:35.869760 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:35.872755 kubelet[2413]: E0120 00:47:35.869933 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:35.874278 containerd[1602]: time="2026-01-20T00:47:35.869123475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:47:35.874278 containerd[1602]: time="2026-01-20T00:47:35.869220346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:47:35.874278 containerd[1602]: time="2026-01-20T00:47:35.869384583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:47:36.256647 kubelet[2413]: E0120 00:47:36.256476 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="3.2s"
Jan 20 00:47:36.257237 kubelet[2413]: W0120 00:47:36.256752 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:36.257237 kubelet[2413]: E0120 00:47:36.256824 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:36.383615 containerd[1602]: time="2026-01-20T00:47:36.383438841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"583bde1fb39b4bf3cb3deb268c04c48115055b9f439f9027842f333c3ae0305a\""
Jan 20 00:47:36.391626 kubelet[2413]: E0120 00:47:36.390843 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:36.400625 containerd[1602]: time="2026-01-20T00:47:36.400349777Z" level=info msg="CreateContainer within sandbox \"583bde1fb39b4bf3cb3deb268c04c48115055b9f439f9027842f333c3ae0305a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 20 00:47:36.447771 containerd[1602]: time="2026-01-20T00:47:36.447622949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8291a7627b19eedc11728f83f8ae425c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eb5f413654d9e083842fba09e4aa83a64e1bb0c91e5f61545401d90681e77ba\""
Jan 20 00:47:36.448267 containerd[1602]: time="2026-01-20T00:47:36.448185639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef3b922f2e05096b18aa870962f1a1791587f649a22deee76eb324ff4b6973a4\""
Jan 20 00:47:36.449025 kubelet[2413]: E0120 00:47:36.448929 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:36.451418 kubelet[2413]: E0120 00:47:36.451386 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:36.460614 containerd[1602]: time="2026-01-20T00:47:36.459607757Z" level=info msg="CreateContainer within sandbox \"1eb5f413654d9e083842fba09e4aa83a64e1bb0c91e5f61545401d90681e77ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 20 00:47:36.461777 containerd[1602]: time="2026-01-20T00:47:36.461603012Z" level=info msg="CreateContainer within sandbox \"ef3b922f2e05096b18aa870962f1a1791587f649a22deee76eb324ff4b6973a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 20 00:47:36.482754 containerd[1602]: time="2026-01-20T00:47:36.482590751Z" level=info msg="CreateContainer within sandbox \"583bde1fb39b4bf3cb3deb268c04c48115055b9f439f9027842f333c3ae0305a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa01b9ac5fa68140d0eace25a7661927e8f0d59ea51328ecab420d07ab9784c3\""
Jan 20 00:47:36.485049 containerd[1602]: time="2026-01-20T00:47:36.484924840Z" level=info msg="StartContainer for \"fa01b9ac5fa68140d0eace25a7661927e8f0d59ea51328ecab420d07ab9784c3\""
Jan 20 00:47:36.508298 containerd[1602]: time="2026-01-20T00:47:36.508108086Z" level=info msg="CreateContainer within sandbox \"1eb5f413654d9e083842fba09e4aa83a64e1bb0c91e5f61545401d90681e77ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebe6b01a923510b656333b042bd8654a47939e308528234d51a74d165a221d10\""
Jan 20 00:47:36.509219 containerd[1602]: time="2026-01-20T00:47:36.508930022Z" level=info msg="StartContainer for \"ebe6b01a923510b656333b042bd8654a47939e308528234d51a74d165a221d10\""
Jan 20 00:47:36.514346 containerd[1602]: time="2026-01-20T00:47:36.514184624Z" level=info msg="CreateContainer within sandbox \"ef3b922f2e05096b18aa870962f1a1791587f649a22deee76eb324ff4b6973a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611\""
Jan 20 00:47:36.515204 containerd[1602]: time="2026-01-20T00:47:36.515167750Z" level=info msg="StartContainer for \"5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611\""
Jan 20 00:47:36.727374 kubelet[2413]: W0120 00:47:36.727220 2413 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Jan 20 00:47:36.727374 kubelet[2413]: E0120 00:47:36.727344 2413 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Jan 20 00:47:36.775749 containerd[1602]: time="2026-01-20T00:47:36.774271901Z" level=info msg="StartContainer for \"fa01b9ac5fa68140d0eace25a7661927e8f0d59ea51328ecab420d07ab9784c3\" returns successfully"
Jan 20 00:47:36.794217 containerd[1602]: time="2026-01-20T00:47:36.793917840Z" level=info msg="StartContainer for \"ebe6b01a923510b656333b042bd8654a47939e308528234d51a74d165a221d10\" returns successfully"
Jan 20 00:47:36.809603 kubelet[2413]: I0120 00:47:36.808399 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:47:36.809603 kubelet[2413]: E0120 00:47:36.808853 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Jan 20 00:47:36.838612 containerd[1602]: time="2026-01-20T00:47:36.836810539Z" level=info msg="StartContainer for \"5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611\" returns successfully"
Jan 20 00:47:37.176378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547459513.mount: Deactivated successfully.
Jan 20 00:47:37.553630 kubelet[2413]: E0120 00:47:37.552193 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:37.553630 kubelet[2413]: E0120 00:47:37.552706 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:37.563783 kubelet[2413]: E0120 00:47:37.561898 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:37.563783 kubelet[2413]: E0120 00:47:37.562338 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:37.580908 kubelet[2413]: E0120 00:47:37.580831 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:37.581202 kubelet[2413]: E0120 00:47:37.581146 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:38.599720 kubelet[2413]: E0120 00:47:38.596542 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:38.599720 kubelet[2413]: E0120 00:47:38.597228 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:38.603291 kubelet[2413]: E0120 00:47:38.602934 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:38.604992 kubelet[2413]: E0120 00:47:38.603418 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:40.082390 kubelet[2413]: I0120 00:47:40.081708 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 00:47:41.775799 kubelet[2413]: E0120 00:47:41.774744 2413 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 00:47:41.775799 kubelet[2413]: E0120 00:47:41.775404 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:43.200346 kubelet[2413]: E0120 00:47:43.199796 2413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 20 00:47:43.220047 kubelet[2413]: I0120 00:47:43.218178 2413 apiserver.go:52] "Watching apiserver"
Jan 20 00:47:43.269241 kubelet[2413]: I0120 00:47:43.269168 2413 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 00:47:43.286595 kubelet[2413]: E0120 00:47:43.286365 2413 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c49f103ce1d7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:47:33.149547901 +0000 UTC m=+1.213854934,LastTimestamp:2026-01-20 00:47:33.149547901 +0000 UTC m=+1.213854934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 00:47:43.389115 kubelet[2413]: E0120 00:47:43.388890 2413 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c49f104973d3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:47:33.162728764 +0000 UTC m=+1.227035775,LastTimestamp:2026-01-20 00:47:33.162728764 +0000 UTC m=+1.227035775,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 20 00:47:43.394235 kubelet[2413]: I0120 00:47:43.393511 2413 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 00:47:43.394235 kubelet[2413]: E0120 00:47:43.393594 2413 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 20 00:47:43.474156 kubelet[2413]: I0120 00:47:43.474055 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:47:43.503890 kubelet[2413]: E0120 00:47:43.500029 2413 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:47:43.503890 kubelet[2413]: I0120 00:47:43.500142 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:43.510791 kubelet[2413]: E0120 00:47:43.509181 2413 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:43.510791 kubelet[2413]: I0120 00:47:43.509292 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:47:43.519641 kubelet[2413]: E0120 00:47:43.519017 2413 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 20 00:47:45.774404 kubelet[2413]: I0120 00:47:45.773292 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 20 00:47:45.797712 kubelet[2413]: E0120 00:47:45.796950 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:46.092852 kubelet[2413]: E0120 00:47:46.092471 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:46.255868 kubelet[2413]: I0120 00:47:46.255658 2413 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 00:47:46.285592 kubelet[2413]: E0120 00:47:46.285392 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:46.500888 kubelet[2413]: I0120 00:47:46.500725 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.500675178 podStartE2EDuration="1.500675178s" podCreationTimestamp="2026-01-20 00:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:47:46.500473821 +0000 UTC m=+14.564780864" watchObservedRunningTime="2026-01-20 00:47:46.500675178 +0000 UTC m=+14.564982190"
Jan 20 00:47:47.100773 kubelet[2413]: E0120 00:47:47.100278 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:47:47.158870 kubelet[2413]: I0120 00:47:47.157334 2413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.157307457 podStartE2EDuration="1.157307457s" podCreationTimestamp="2026-01-20 00:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:47:47.15031325 +0000 UTC m=+15.214620292" watchObservedRunningTime="2026-01-20 00:47:47.157307457 +0000 UTC m=+15.221614468"
Jan 20 00:47:47.697909 systemd[1]: Reloading requested from client PID 2691 ('systemctl') (unit session-9.scope)...
Jan 20 00:47:47.698012 systemd[1]: Reloading...
Jan 20 00:47:47.934016 zram_generator::config[2730]: No configuration found.
Jan 20 00:47:48.359393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:47:48.607536 systemd[1]: Reloading finished in 908 ms.
Jan 20 00:47:48.718334 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:48.772940 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 00:47:48.777355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:48.820860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:47:49.350738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:47:49.384855 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 00:47:49.544087 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
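The recurring dns.go:153 records in this stretch concern the node's own /etc/resolv.conf: glibc resolvers use at most three nameservers, so kubelet applies only the first three entries (1.1.1.1 1.0.0.1 8.8.8.8 above) and warns that the rest were omitted. A small checker along those lines; the limit of 3 is glibc's MAXNS and the path is the conventional one, neither appears verbatim in the log:

MAX_NS = 3  # glibc MAXNS: at most three nameserver entries are honored

servers = []
with open("/etc/resolv.conf") as fh:
    for line in fh:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])

if len(servers) > MAX_NS:
    print(f"{len(servers)} nameservers; only the first {MAX_NS} apply: {servers[:MAX_NS]}")
else:
    print(f"within limits: {servers}")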
Jan 20 00:47:49.544087 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:47:49.544087 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:47:49.544087 kubelet[2786]: I0120 00:47:49.541808 2786 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:47:49.567661 kubelet[2786]: I0120 00:47:49.566898 2786 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:47:49.567661 kubelet[2786]: I0120 00:47:49.566936 2786 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:47:49.567661 kubelet[2786]: I0120 00:47:49.567395 2786 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:47:49.575681 kubelet[2786]: I0120 00:47:49.571301 2786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 00:47:49.578738 kubelet[2786]: I0120 00:47:49.578652 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:47:49.612377 kubelet[2786]: E0120 00:47:49.612137 2786 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:47:49.612377 kubelet[2786]: I0120 00:47:49.612256 2786 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:47:49.639706 kubelet[2786]: I0120 00:47:49.634754 2786 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:47:49.639706 kubelet[2786]: I0120 00:47:49.636681 2786 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:47:49.639706 kubelet[2786]: I0120 00:47:49.636740 2786 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 20 00:47:49.639706 kubelet[2786]: I0120 00:47:49.637320 2786 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:47:49.640132 kubelet[2786]: I0120 00:47:49.637340 2786 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:47:49.640132 kubelet[2786]: I0120 00:47:49.637434 2786 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:47:49.640132 kubelet[2786]: I0120 00:47:49.637755 2786 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:47:49.640132 kubelet[2786]: I0120 00:47:49.638658 2786 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:47:49.640132 kubelet[2786]: I0120 00:47:49.638694 2786 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:47:49.640132 kubelet[2786]: I0120 00:47:49.638712 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:47:49.647733 kubelet[2786]: I0120 00:47:49.644045 2786 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:47:49.647733 kubelet[2786]: I0120 00:47:49.645221 2786 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:47:49.647733 kubelet[2786]: I0120 00:47:49.646624 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:47:49.647733 kubelet[2786]: I0120 00:47:49.646731 2786 server.go:1287] "Started kubelet" Jan 20 00:47:49.652734 kubelet[2786]: I0120 00:47:49.651259 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:47:49.652734 kubelet[2786]: I0120 00:47:49.651505 2786 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 20 00:47:49.665762 kubelet[2786]: I0120 00:47:49.659894 2786 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:47:49.665762 kubelet[2786]: I0120 00:47:49.661885 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:47:49.665762 kubelet[2786]: I0120 00:47:49.662233 2786 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:47:49.665762 kubelet[2786]: I0120 00:47:49.662923 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:47:49.687072 kubelet[2786]: I0120 00:47:49.682105 2786 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:47:49.687072 kubelet[2786]: I0120 00:47:49.682251 2786 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:47:49.687072 kubelet[2786]: I0120 00:47:49.682501 2786 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:47:49.700741 kubelet[2786]: E0120 00:47:49.698255 2786 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:47:49.704385 kubelet[2786]: I0120 00:47:49.704352 2786 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:47:49.709316 kubelet[2786]: I0120 00:47:49.707415 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:47:49.719465 kubelet[2786]: E0120 00:47:49.713427 2786 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:47:49.724365 kubelet[2786]: I0120 00:47:49.723250 2786 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:47:49.770590 kubelet[2786]: I0120 00:47:49.770466 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:47:49.778831 kubelet[2786]: I0120 00:47:49.775777 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:47:49.778831 kubelet[2786]: I0120 00:47:49.775818 2786 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:47:49.778831 kubelet[2786]: I0120 00:47:49.775851 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:47:49.778831 kubelet[2786]: I0120 00:47:49.775862 2786 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:47:49.778831 kubelet[2786]: E0120 00:47:49.775937 2786 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:47:49.876809 kubelet[2786]: E0120 00:47:49.876319 2786 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 00:47:50.151111 kubelet[2786]: E0120 00:47:50.116766 2786 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 00:47:50.170221 kubelet[2786]: I0120 00:47:50.168468 2786 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:47:50.170221 kubelet[2786]: I0120 00:47:50.169137 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:47:50.170221 kubelet[2786]: I0120 00:47:50.169488 2786 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:47:50.179549 kubelet[2786]: I0120 00:47:50.173701 2786 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:47:50.179549 kubelet[2786]: I0120 00:47:50.173908 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:47:50.179549 kubelet[2786]: I0120 00:47:50.174096 2786 policy_none.go:49] "None policy: Start" Jan 20 00:47:50.179549 kubelet[2786]: I0120 00:47:50.174113 2786 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:47:50.179549 kubelet[2786]: I0120 00:47:50.174134 2786 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:47:50.269654 kubelet[2786]: I0120 00:47:50.213307 2786 state_mem.go:75] "Updated machine memory state" Jan 20 00:47:50.291905 kubelet[2786]: I0120 00:47:50.280867 2786 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:47:50.291905 kubelet[2786]: I0120 00:47:50.284308 2786 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:47:50.291905 kubelet[2786]: E0120 00:47:50.292104 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:47:50.299551 kubelet[2786]: I0120 00:47:50.284335 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:47:50.299551 kubelet[2786]: I0120 00:47:50.301407 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:47:50.435420 kubelet[2786]: I0120 00:47:50.434251 2786 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:47:50.469648 kubelet[2786]: I0120 00:47:50.469505 2786 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:47:50.474751 kubelet[2786]: I0120 00:47:50.473169 2786 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:47:50.553050 kubelet[2786]: I0120 00:47:50.552944 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:47:50.557042 kubelet[2786]: I0120 00:47:50.553249 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.557042 kubelet[2786]: I0120 00:47:50.551561 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:47:50.595453 kubelet[2786]: E0120 00:47:50.592337 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.596387 kubelet[2786]: E0120 00:47:50.596353 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:47:50.622294 kubelet[2786]: I0120 00:47:50.619659 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.622294 kubelet[2786]: I0120 00:47:50.619738 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.622294 kubelet[2786]: I0120 00:47:50.619779 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.622294 kubelet[2786]: I0120 00:47:50.619869 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8291a7627b19eedc11728f83f8ae425c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8291a7627b19eedc11728f83f8ae425c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:47:50.622294 kubelet[2786]: I0120 00:47:50.619906 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8291a7627b19eedc11728f83f8ae425c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8291a7627b19eedc11728f83f8ae425c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:47:50.622799 kubelet[2786]: I0120 00:47:50.620045 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.622799 kubelet[2786]: I0120 00:47:50.620071 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:47:50.622799 kubelet[2786]: I0120 00:47:50.620098 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:47:50.622799 kubelet[2786]: I0120 00:47:50.620118 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8291a7627b19eedc11728f83f8ae425c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8291a7627b19eedc11728f83f8ae425c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:47:50.641934 kubelet[2786]: I0120 00:47:50.641627 2786 apiserver.go:52] "Watching apiserver" Jan 20 00:47:50.684082 kubelet[2786]: I0120 00:47:50.683925 2786 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:47:50.895062 kubelet[2786]: E0120 00:47:50.894381 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:50.897505 kubelet[2786]: E0120 00:47:50.897042 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:50.899331 kubelet[2786]: E0120 00:47:50.899193 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:50.975411 kubelet[2786]: I0120 00:47:50.975237 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.97516112 podStartE2EDuration="975.16112ms" podCreationTimestamp="2026-01-20 00:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:47:50.944468336 +0000 UTC m=+1.552259036" watchObservedRunningTime="2026-01-20 00:47:50.97516112 +0000 UTC m=+1.582951820" Jan 20 00:47:51.396939 kubelet[2786]: I0120 00:47:51.396616 2786 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:47:51.401025 containerd[1602]: 
time="2026-01-20T00:47:51.400873105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:47:51.402506 kubelet[2786]: I0120 00:47:51.401680 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:47:51.947695 kubelet[2786]: E0120 00:47:51.945364 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:51.947695 kubelet[2786]: E0120 00:47:51.946541 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:51.947695 kubelet[2786]: E0120 00:47:51.946861 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:52.328607 kubelet[2786]: I0120 00:47:52.326840 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2d9061d-6424-4c98-bd7a-38b38dbecec7-xtables-lock\") pod \"kube-proxy-d48nq\" (UID: \"c2d9061d-6424-4c98-bd7a-38b38dbecec7\") " pod="kube-system/kube-proxy-d48nq" Jan 20 00:47:52.328607 kubelet[2786]: I0120 00:47:52.327103 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2d9061d-6424-4c98-bd7a-38b38dbecec7-lib-modules\") pod \"kube-proxy-d48nq\" (UID: \"c2d9061d-6424-4c98-bd7a-38b38dbecec7\") " pod="kube-system/kube-proxy-d48nq" Jan 20 00:47:52.328607 kubelet[2786]: I0120 00:47:52.327185 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2d9061d-6424-4c98-bd7a-38b38dbecec7-kube-proxy\") pod \"kube-proxy-d48nq\" (UID: \"c2d9061d-6424-4c98-bd7a-38b38dbecec7\") " pod="kube-system/kube-proxy-d48nq" Jan 20 00:47:52.328607 kubelet[2786]: I0120 00:47:52.327222 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfvwn\" (UniqueName: \"kubernetes.io/projected/c2d9061d-6424-4c98-bd7a-38b38dbecec7-kube-api-access-gfvwn\") pod \"kube-proxy-d48nq\" (UID: \"c2d9061d-6424-4c98-bd7a-38b38dbecec7\") " pod="kube-system/kube-proxy-d48nq" Jan 20 00:47:52.559421 kubelet[2786]: E0120 00:47:52.559304 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:52.564010 containerd[1602]: time="2026-01-20T00:47:52.560511952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d48nq,Uid:c2d9061d-6424-4c98-bd7a-38b38dbecec7,Namespace:kube-system,Attempt:0,}" Jan 20 00:47:52.954032 kubelet[2786]: E0120 00:47:52.950194 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:52.954032 kubelet[2786]: E0120 00:47:52.953432 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:52.967414 containerd[1602]: 
time="2026-01-20T00:47:52.966134690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:47:52.967414 containerd[1602]: time="2026-01-20T00:47:52.966436867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:47:52.967414 containerd[1602]: time="2026-01-20T00:47:52.966466322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:47:52.967414 containerd[1602]: time="2026-01-20T00:47:52.966758862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:47:54.093692 containerd[1602]: time="2026-01-20T00:47:54.093643456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d48nq,Uid:c2d9061d-6424-4c98-bd7a-38b38dbecec7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c4fc4218f42ea4bdb2c08ab227b9e457777fd74254f15c894135fc612b73f24\"" Jan 20 00:47:54.101945 kubelet[2786]: E0120 00:47:54.101911 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:54.132236 containerd[1602]: time="2026-01-20T00:47:54.132184499Z" level=info msg="CreateContainer within sandbox \"4c4fc4218f42ea4bdb2c08ab227b9e457777fd74254f15c894135fc612b73f24\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:47:54.275429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118197480.mount: Deactivated successfully. Jan 20 00:47:54.323430 containerd[1602]: time="2026-01-20T00:47:54.323353844Z" level=info msg="CreateContainer within sandbox \"4c4fc4218f42ea4bdb2c08ab227b9e457777fd74254f15c894135fc612b73f24\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3acf75340daf34d4cf814f96de965e36e169765da9381cad4a7628aa820cd3c\"" Jan 20 00:47:54.326037 containerd[1602]: time="2026-01-20T00:47:54.324893361Z" level=info msg="StartContainer for \"b3acf75340daf34d4cf814f96de965e36e169765da9381cad4a7628aa820cd3c\"" Jan 20 00:47:54.852642 containerd[1602]: time="2026-01-20T00:47:54.849321325Z" level=info msg="StartContainer for \"b3acf75340daf34d4cf814f96de965e36e169765da9381cad4a7628aa820cd3c\" returns successfully" Jan 20 00:47:54.857693 kubelet[2786]: I0120 00:47:54.857527 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg9tk\" (UniqueName: \"kubernetes.io/projected/e38b1d22-43b3-4ba8-b680-689199f100c1-kube-api-access-kg9tk\") pod \"tigera-operator-7dcd859c48-shj9j\" (UID: \"e38b1d22-43b3-4ba8-b680-689199f100c1\") " pod="tigera-operator/tigera-operator-7dcd859c48-shj9j" Jan 20 00:47:54.858194 kubelet[2786]: I0120 00:47:54.858070 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e38b1d22-43b3-4ba8-b680-689199f100c1-var-lib-calico\") pod \"tigera-operator-7dcd859c48-shj9j\" (UID: \"e38b1d22-43b3-4ba8-b680-689199f100c1\") " pod="tigera-operator/tigera-operator-7dcd859c48-shj9j" Jan 20 00:47:55.059248 kubelet[2786]: E0120 00:47:55.059166 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
00:47:55.103478 containerd[1602]: time="2026-01-20T00:47:55.101669460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-shj9j,Uid:e38b1d22-43b3-4ba8-b680-689199f100c1,Namespace:tigera-operator,Attempt:0,}" Jan 20 00:47:55.105912 kubelet[2786]: I0120 00:47:55.105449 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d48nq" podStartSLOduration=3.105420901 podStartE2EDuration="3.105420901s" podCreationTimestamp="2026-01-20 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:47:55.097888032 +0000 UTC m=+5.705678753" watchObservedRunningTime="2026-01-20 00:47:55.105420901 +0000 UTC m=+5.713211601" Jan 20 00:47:55.189418 containerd[1602]: time="2026-01-20T00:47:55.189239362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:47:55.192417 containerd[1602]: time="2026-01-20T00:47:55.191824440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:47:55.192417 containerd[1602]: time="2026-01-20T00:47:55.191925146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:47:55.192417 containerd[1602]: time="2026-01-20T00:47:55.192216424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:47:55.419126 containerd[1602]: time="2026-01-20T00:47:55.418778170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-shj9j,Uid:e38b1d22-43b3-4ba8-b680-689199f100c1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bcf329c5d2ead16a2b4da6972ae6b5ea8e5d5e8547193ce1a3a9b7568de68604\"" Jan 20 00:47:55.447011 containerd[1602]: time="2026-01-20T00:47:55.446822995Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 00:47:57.207220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233590089.mount: Deactivated successfully. 
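
The dns.go errors that repeat throughout this log ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the host's resolv.conf lists more nameservers than the kubelet will propagate into pod sandboxes; the limit is three, matching common libc resolvers, so the extra entries are dropped. Assuming the three resolvers shown in the applied line are the intended ones, trimming the host /etc/resolv.conf to the limit would silence the error; a sketch:

    # /etc/resolv.conf -- at most three nameserver lines are honored for pods
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
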
Jan 20 00:47:57.461585 kubelet[2786]: E0120 00:47:57.458581 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:57.966146 kubelet[2786]: E0120 00:47:57.965631 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:58.093379 kubelet[2786]: E0120 00:47:58.091064 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:58.093379 kubelet[2786]: E0120 00:47:58.091539 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:47:59.631730 containerd[1602]: time="2026-01-20T00:47:59.631407280Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:47:59.643528 containerd[1602]: time="2026-01-20T00:47:59.641887108Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 00:47:59.652611 containerd[1602]: time="2026-01-20T00:47:59.650297406Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:47:59.661538 containerd[1602]: time="2026-01-20T00:47:59.661375580Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:47:59.663764 containerd[1602]: time="2026-01-20T00:47:59.663643895Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.216764498s" Jan 20 00:47:59.663764 containerd[1602]: time="2026-01-20T00:47:59.663725506Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 00:47:59.676299 containerd[1602]: time="2026-01-20T00:47:59.676236682Z" level=info msg="CreateContainer within sandbox \"bcf329c5d2ead16a2b4da6972ae6b5ea8e5d5e8547193ce1a3a9b7568de68604\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 00:47:59.747989 containerd[1602]: time="2026-01-20T00:47:59.746307118Z" level=info msg="CreateContainer within sandbox \"bcf329c5d2ead16a2b4da6972ae6b5ea8e5d5e8547193ce1a3a9b7568de68604\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ebb6103ee31090d5337837b634d74c09982e3afab34e5bc78f3baeab4956dd03\"" Jan 20 00:47:59.753694 containerd[1602]: time="2026-01-20T00:47:59.750399038Z" level=info msg="StartContainer for \"ebb6103ee31090d5337837b634d74c09982e3afab34e5bc78f3baeab4956dd03\"" Jan 20 00:48:00.008232 containerd[1602]: time="2026-01-20T00:48:00.008110132Z" level=info msg="StartContainer for \"ebb6103ee31090d5337837b634d74c09982e3afab34e5bc78f3baeab4956dd03\" returns successfully" Jan 20 
00:48:00.164865 kubelet[2786]: I0120 00:48:00.164731 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-shj9j" podStartSLOduration=1.929247071 podStartE2EDuration="6.164704884s" podCreationTimestamp="2026-01-20 00:47:54 +0000 UTC" firstStartedPulling="2026-01-20 00:47:55.434304341 +0000 UTC m=+6.042095031" lastFinishedPulling="2026-01-20 00:47:59.669762154 +0000 UTC m=+10.277552844" observedRunningTime="2026-01-20 00:48:00.163934491 +0000 UTC m=+10.771725181" watchObservedRunningTime="2026-01-20 00:48:00.164704884 +0000 UTC m=+10.772495574" Jan 20 00:48:00.835620 kubelet[2786]: E0120 00:48:00.835258 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:01.122901 kubelet[2786]: E0120 00:48:01.121855 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:10.543403 sudo[1810]: pam_unix(sudo:session): session closed for user root Jan 20 00:48:10.570855 sshd[1803]: pam_unix(sshd:session): session closed for user core Jan 20 00:48:10.617260 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:36656.service: Deactivated successfully. Jan 20 00:48:10.940060 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:48:10.951262 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:48:10.956694 systemd-logind[1586]: Removed session 9. Jan 20 00:48:23.103515 kubelet[2786]: I0120 00:48:23.102464 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5wm\" (UniqueName: \"kubernetes.io/projected/5b2d43d4-0873-480f-8e7a-df1d32890cce-kube-api-access-4t5wm\") pod \"calico-typha-57f8d7b9c-c47ns\" (UID: \"5b2d43d4-0873-480f-8e7a-df1d32890cce\") " pod="calico-system/calico-typha-57f8d7b9c-c47ns" Jan 20 00:48:23.103515 kubelet[2786]: I0120 00:48:23.102532 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b2d43d4-0873-480f-8e7a-df1d32890cce-typha-certs\") pod \"calico-typha-57f8d7b9c-c47ns\" (UID: \"5b2d43d4-0873-480f-8e7a-df1d32890cce\") " pod="calico-system/calico-typha-57f8d7b9c-c47ns" Jan 20 00:48:23.103515 kubelet[2786]: I0120 00:48:23.102562 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b2d43d4-0873-480f-8e7a-df1d32890cce-tigera-ca-bundle\") pod \"calico-typha-57f8d7b9c-c47ns\" (UID: \"5b2d43d4-0873-480f-8e7a-df1d32890cce\") " pod="calico-system/calico-typha-57f8d7b9c-c47ns" Jan 20 00:48:23.336765 kubelet[2786]: E0120 00:48:23.336596 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:23.344995 containerd[1602]: time="2026-01-20T00:48:23.341269138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f8d7b9c-c47ns,Uid:5b2d43d4-0873-480f-8e7a-df1d32890cce,Namespace:calico-system,Attempt:0,}" Jan 20 00:48:23.521785 kubelet[2786]: I0120 00:48:23.521591 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/17c05c43-5847-414e-bbd3-e0f6f79211e4-tigera-ca-bundle\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.521785 kubelet[2786]: I0120 00:48:23.521680 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-var-lib-calico\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.521785 kubelet[2786]: I0120 00:48:23.521722 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-flexvol-driver-host\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.521785 kubelet[2786]: I0120 00:48:23.521758 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-lib-modules\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.521785 kubelet[2786]: I0120 00:48:23.521784 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-policysync\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525696 kubelet[2786]: I0120 00:48:23.521805 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-var-run-calico\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525696 kubelet[2786]: I0120 00:48:23.521826 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-xtables-lock\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525696 kubelet[2786]: I0120 00:48:23.521859 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-cni-log-dir\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525696 kubelet[2786]: I0120 00:48:23.521889 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-cni-net-dir\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525696 kubelet[2786]: I0120 00:48:23.521915 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/17c05c43-5847-414e-bbd3-e0f6f79211e4-node-certs\") pod 
\"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525938 kubelet[2786]: I0120 00:48:23.521941 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17c05c43-5847-414e-bbd3-e0f6f79211e4-cni-bin-dir\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.525938 kubelet[2786]: I0120 00:48:23.522016 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpb76\" (UniqueName: \"kubernetes.io/projected/17c05c43-5847-414e-bbd3-e0f6f79211e4-kube-api-access-zpb76\") pod \"calico-node-rmjqr\" (UID: \"17c05c43-5847-414e-bbd3-e0f6f79211e4\") " pod="calico-system/calico-node-rmjqr" Jan 20 00:48:23.560462 kubelet[2786]: E0120 00:48:23.560036 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:23.592638 containerd[1602]: time="2026-01-20T00:48:23.591901320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:48:23.592638 containerd[1602]: time="2026-01-20T00:48:23.592121879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:48:23.592638 containerd[1602]: time="2026-01-20T00:48:23.592183224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:48:23.594664 containerd[1602]: time="2026-01-20T00:48:23.594073152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:48:23.668176 kubelet[2786]: E0120 00:48:23.667932 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.668176 kubelet[2786]: W0120 00:48:23.668018 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.677141 kubelet[2786]: E0120 00:48:23.676803 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.698948 kubelet[2786]: E0120 00:48:23.693285 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.698948 kubelet[2786]: W0120 00:48:23.693439 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.698948 kubelet[2786]: E0120 00:48:23.693479 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.738046 kubelet[2786]: E0120 00:48:23.737754 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.738494 kubelet[2786]: W0120 00:48:23.738410 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.738759 kubelet[2786]: E0120 00:48:23.738732 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.739515 kubelet[2786]: I0120 00:48:23.739416 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/946b9e08-0972-42be-947f-c9b1fe484382-kubelet-dir\") pod \"csi-node-driver-ffgch\" (UID: \"946b9e08-0972-42be-947f-c9b1fe484382\") " pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:23.741418 kubelet[2786]: E0120 00:48:23.741386 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.741418 kubelet[2786]: W0120 00:48:23.741407 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.741547 kubelet[2786]: E0120 00:48:23.741512 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.744195 kubelet[2786]: E0120 00:48:23.744113 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.744284 kubelet[2786]: W0120 00:48:23.744219 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.744284 kubelet[2786]: E0120 00:48:23.744244 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.746673 kubelet[2786]: E0120 00:48:23.746348 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.746673 kubelet[2786]: W0120 00:48:23.746384 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.746673 kubelet[2786]: E0120 00:48:23.746468 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.748821 kubelet[2786]: E0120 00:48:23.746855 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.748821 kubelet[2786]: W0120 00:48:23.746870 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.748821 kubelet[2786]: E0120 00:48:23.746886 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.748821 kubelet[2786]: I0120 00:48:23.746921 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/946b9e08-0972-42be-947f-c9b1fe484382-registration-dir\") pod \"csi-node-driver-ffgch\" (UID: \"946b9e08-0972-42be-947f-c9b1fe484382\") " pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:23.750036 kubelet[2786]: E0120 00:48:23.749846 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.750036 kubelet[2786]: W0120 00:48:23.749883 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.750036 kubelet[2786]: E0120 00:48:23.750033 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.750558 kubelet[2786]: I0120 00:48:23.750509 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/946b9e08-0972-42be-947f-c9b1fe484382-socket-dir\") pod \"csi-node-driver-ffgch\" (UID: \"946b9e08-0972-42be-947f-c9b1fe484382\") " pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:23.754490 kubelet[2786]: E0120 00:48:23.751086 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.754490 kubelet[2786]: W0120 00:48:23.751115 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.754490 kubelet[2786]: E0120 00:48:23.751140 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.754490 kubelet[2786]: E0120 00:48:23.751667 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.754490 kubelet[2786]: W0120 00:48:23.751681 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.754490 kubelet[2786]: E0120 00:48:23.751703 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.754490 kubelet[2786]: E0120 00:48:23.754489 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.754900 kubelet[2786]: W0120 00:48:23.754507 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.754900 kubelet[2786]: E0120 00:48:23.754536 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.755545 kubelet[2786]: E0120 00:48:23.755479 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.755545 kubelet[2786]: W0120 00:48:23.755514 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.759919 kubelet[2786]: E0120 00:48:23.759580 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.760659 kubelet[2786]: E0120 00:48:23.760636 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.760800 kubelet[2786]: W0120 00:48:23.760738 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.760800 kubelet[2786]: E0120 00:48:23.760769 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.761234 kubelet[2786]: I0120 00:48:23.760948 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/946b9e08-0972-42be-947f-c9b1fe484382-varrun\") pod \"csi-node-driver-ffgch\" (UID: \"946b9e08-0972-42be-947f-c9b1fe484382\") " pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:23.764103 kubelet[2786]: E0120 00:48:23.764046 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.764205 kubelet[2786]: W0120 00:48:23.764129 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.764205 kubelet[2786]: E0120 00:48:23.764161 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.767374 kubelet[2786]: E0120 00:48:23.765166 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.767374 kubelet[2786]: W0120 00:48:23.765243 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.767507 kubelet[2786]: E0120 00:48:23.767476 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.768680 kubelet[2786]: E0120 00:48:23.768619 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.768772 kubelet[2786]: W0120 00:48:23.768742 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.768823 kubelet[2786]: E0120 00:48:23.768768 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.769697 kubelet[2786]: I0120 00:48:23.768897 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnnh\" (UniqueName: \"kubernetes.io/projected/946b9e08-0972-42be-947f-c9b1fe484382-kube-api-access-2vnnh\") pod \"csi-node-driver-ffgch\" (UID: \"946b9e08-0972-42be-947f-c9b1fe484382\") " pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:23.771275 kubelet[2786]: E0120 00:48:23.770225 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.771275 kubelet[2786]: W0120 00:48:23.770243 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.771275 kubelet[2786]: E0120 00:48:23.770260 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.771275 kubelet[2786]: E0120 00:48:23.771050 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.771275 kubelet[2786]: W0120 00:48:23.771064 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.771275 kubelet[2786]: E0120 00:48:23.771174 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.872876 kubelet[2786]: E0120 00:48:23.872574 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.872876 kubelet[2786]: W0120 00:48:23.872628 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.872876 kubelet[2786]: E0120 00:48:23.872664 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.878557 kubelet[2786]: E0120 00:48:23.876723 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.878557 kubelet[2786]: W0120 00:48:23.876744 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.878557 kubelet[2786]: E0120 00:48:23.876778 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.878557 kubelet[2786]: E0120 00:48:23.877868 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.878557 kubelet[2786]: W0120 00:48:23.878037 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.878557 kubelet[2786]: E0120 00:48:23.878163 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.885128 kubelet[2786]: E0120 00:48:23.884720 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.885128 kubelet[2786]: W0120 00:48:23.884747 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.885784 kubelet[2786]: E0120 00:48:23.885705 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.886123 kubelet[2786]: W0120 00:48:23.885884 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.889001 kubelet[2786]: E0120 00:48:23.888884 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.889090 kubelet[2786]: E0120 00:48:23.889035 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.889300 kubelet[2786]: E0120 00:48:23.889276 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.890057 kubelet[2786]: W0120 00:48:23.889491 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.892478 kubelet[2786]: E0120 00:48:23.891425 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.892926 kubelet[2786]: E0120 00:48:23.892835 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.892926 kubelet[2786]: W0120 00:48:23.892859 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.893375 kubelet[2786]: E0120 00:48:23.893208 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.893825 kubelet[2786]: E0120 00:48:23.893808 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.894071 kubelet[2786]: W0120 00:48:23.893912 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.894205 kubelet[2786]: E0120 00:48:23.894182 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.898676 kubelet[2786]: E0120 00:48:23.896116 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.903046 kubelet[2786]: W0120 00:48:23.902523 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.903046 kubelet[2786]: E0120 00:48:23.902725 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.906660 kubelet[2786]: E0120 00:48:23.906561 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.907138 kubelet[2786]: W0120 00:48:23.906774 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.907568 kubelet[2786]: E0120 00:48:23.907544 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:48:23.916473 containerd[1602]: time="2026-01-20T00:48:23.913406556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f8d7b9c-c47ns,Uid:5b2d43d4-0873-480f-8e7a-df1d32890cce,Namespace:calico-system,Attempt:0,} returns sandbox id \"431e81fd10e2be2e9e5cf83ec8ed658e93fb337545479e42a96649e11e9f8cfe\"" Jan 20 00:48:23.916661 kubelet[2786]: E0120 00:48:23.913640 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.916661 kubelet[2786]: W0120 00:48:23.913663 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.916661 kubelet[2786]: E0120 00:48:23.914896 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.916661 kubelet[2786]: E0120 00:48:23.915483 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.916661 kubelet[2786]: W0120 00:48:23.915500 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.916661 kubelet[2786]: E0120 00:48:23.915809 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.916661 kubelet[2786]: W0120 00:48:23.915821 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.916661 kubelet[2786]: E0120 00:48:23.916153 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.916661 kubelet[2786]: E0120 00:48:23.916180 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:23.919694 kubelet[2786]: E0120 00:48:23.916300 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:23.919694 kubelet[2786]: W0120 00:48:23.918847 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:23.919694 kubelet[2786]: E0120 00:48:23.919556 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 20 00:48:23.924489 kubelet[2786]: E0120 00:48:23.924454 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:48:23.928116 containerd[1602]: time="2026-01-20T00:48:23.927938743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
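The dns.go warning (which also recurs periodically and is elided below) fires because kubelet caps a pod's resolv.conf at the libc resolver limit of three nameserver entries; the host file evidently listed more, so only the first three were applied. Trimming the host /etc/resolv.conf to at most three entries silences it, e.g. keeping exactly the applied line:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8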
Jan 20 00:48:23.995713 containerd[1602]: time="2026-01-20T00:48:23.995595854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rmjqr,Uid:17c05c43-5847-414e-bbd3-e0f6f79211e4,Namespace:calico-system,Attempt:0,}"
Jan 20 00:48:24.139220 containerd[1602]: time="2026-01-20T00:48:24.131771110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:48:24.139220 containerd[1602]: time="2026-01-20T00:48:24.132053466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:48:24.139220 containerd[1602]: time="2026-01-20T00:48:24.132162027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:48:24.139220 containerd[1602]: time="2026-01-20T00:48:24.132428712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:48:24.557787 containerd[1602]: time="2026-01-20T00:48:24.557437775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rmjqr,Uid:17c05c43-5847-414e-bbd3-e0f6f79211e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\""
Jan 20 00:48:24.781110 kubelet[2786]: E0120 00:48:24.778929 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382"
Jan 20 00:48:25.441561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153634220.mount: Deactivated successfully.
Jan 20 00:48:28.315635 containerd[1602]: time="2026-01-20T00:48:28.311624287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:48:28.319176 containerd[1602]: time="2026-01-20T00:48:28.319069306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 20 00:48:28.333736 containerd[1602]: time="2026-01-20T00:48:28.333615004Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:48:28.339661 containerd[1602]: time="2026-01-20T00:48:28.339556798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:48:28.345187 containerd[1602]: time="2026-01-20T00:48:28.345075148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.417001624s"
Jan 20 00:48:28.345187 containerd[1602]: time="2026-01-20T00:48:28.345158303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 20 00:48:28.350466 containerd[1602]: time="2026-01-20T00:48:28.348167424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 20 00:48:28.413786 containerd[1602]: time="2026-01-20T00:48:28.412814960Z" level=info msg="CreateContainer within sandbox \"431e81fd10e2be2e9e5cf83ec8ed658e93fb337545479e42a96649e11e9f8cfe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
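The recurring "network is not ready ... cni plugin not initialized" error for csi-node-driver-ffgch (repeated at 00:48:26 and 00:48:28, elided) is an ordering effect: no CNI configuration has been written on the node yet, so any pod that needs pod networking cannot sync. The usual check is whether a config has appeared in kubelet's default CNI config directory, which on Calico clusters is typically written by calico-node's install-cni step (directory path is the standard default, not taken from this log):

    ls /etc/cni/net.d/

Once a config is present there, the runtime's NetworkReady condition flips and these messages stop.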
Jan 20 00:48:28.464552 containerd[1602]: time="2026-01-20T00:48:28.464419121Z" level=info msg="CreateContainer within sandbox \"431e81fd10e2be2e9e5cf83ec8ed658e93fb337545479e42a96649e11e9f8cfe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"989f8debc02bdf5b1e3e33df54683bfe8a5007aa14a1e036261e698ab0d0f8a9\""
Jan 20 00:48:28.471509 containerd[1602]: time="2026-01-20T00:48:28.465644789Z" level=info msg="StartContainer for \"989f8debc02bdf5b1e3e33df54683bfe8a5007aa14a1e036261e698ab0d0f8a9\""
Jan 20 00:48:28.714367 containerd[1602]: time="2026-01-20T00:48:28.713747184Z" level=info msg="StartContainer for \"989f8debc02bdf5b1e3e33df54683bfe8a5007aa14a1e036261e698ab0d0f8a9\" returns successfully"
Jan 20 00:48:28.956244 kubelet[2786]: E0120 00:48:28.956197 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:48:28.963529 kubelet[2786]: W0120 00:48:28.963233 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:48:28.963948 kubelet[2786]: E0120 00:48:28.963667 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
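The CreateContainer/StartContainer pairs above are CRI calls into containerd, and the resulting containers live in containerd's "k8s.io" namespace. A small sketch of enumerating them with the containerd Go client, assuming a containerd 1.x client library and the default socket path (both assumptions, not taken from this log):

    // List Kubernetes-managed containers directly from containerd.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-created containers are placed in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            img, err := c.Image(ctx)
            if err != nil {
                log.Printf("%s: %v", c.ID(), err)
                continue
            }
            fmt.Println(c.ID(), img.Name())
        }
    }

In this log the listing would include the calico-typha container id 989f8deb... alongside its image reference.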
Jan 20 00:48:29.011325 kubelet[2786]: I0120 00:48:29.010708 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57f8d7b9c-c47ns" podStartSLOduration=2.5913379819999998 podStartE2EDuration="7.010690853s" podCreationTimestamp="2026-01-20 00:48:22 +0000 UTC" firstStartedPulling="2026-01-20 00:48:23.927539712 +0000 UTC m=+34.535330412" lastFinishedPulling="2026-01-20 00:48:28.346892594 +0000 UTC m=+38.954683283" observedRunningTime="2026-01-20 00:48:29.010445887 +0000 UTC m=+39.618236587" watchObservedRunningTime="2026-01-20 00:48:29.010690853 +0000 UTC m=+39.618481544"
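The startup-latency record is internally consistent. The end-to-end figure lines up with the watch-observed timestamp minus the creation timestamp, and the SLO duration is the end-to-end duration with the image-pull window (taken from the monotonic m=+ offsets) subtracted:

    podStartE2EDuration = 00:48:29.010690853 - 00:48:22.000000000 = 7.010690853 s
    pull window         = m=+38.954683283 - m=+34.535330412     = 4.419352871 s
    podStartSLOduration = 7.010690853 - 4.419352871             = 2.591337982 s

which matches podStartSLOduration=2.5913379819999998 up to floating-point representation.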
Jan 20 00:48:29.663447 containerd[1602]: time="2026-01-20T00:48:29.663325206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:48:29.664605 containerd[1602]: time="2026-01-20T00:48:29.664549100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 20 00:48:29.668426 containerd[1602]: time="2026-01-20T00:48:29.666780460Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:48:29.677033 containerd[1602]: time="2026-01-20T00:48:29.676879101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:48:29.680314 containerd[1602]: time="2026-01-20T00:48:29.678274605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.33004212s"
Jan 20 00:48:29.680438 containerd[1602]: time="2026-01-20T00:48:29.680275906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 20 00:48:29.697775 containerd[1602]: time="2026-01-20T00:48:29.695170987Z" level=info msg="CreateContainer within sandbox \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 20 00:48:29.769942 containerd[1602]: time="2026-01-20T00:48:29.769620981Z" level=info msg="CreateContainer within sandbox \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4da311550f6c6a91f9e608f36b2b9674ed799fafe2035a1476d6f45a561c59aa\""
Jan 20 00:48:29.774390 containerd[1602]: time="2026-01-20T00:48:29.770578161Z" level=info msg="StartContainer for \"4da311550f6c6a91f9e608f36b2b9674ed799fafe2035a1476d6f45a561c59aa\""
Jan 20 00:48:30.044458 kubelet[2786]: E0120 00:48:30.041853 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:48:30.044458 kubelet[2786]: W0120 00:48:30.041894 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:48:30.044458 kubelet[2786]: E0120 00:48:30.041934 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 20 00:48:30.147416 kubelet[2786]: E0120 00:48:30.146254 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:30.148474 kubelet[2786]: W0120 00:48:30.146278 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:30.148932 kubelet[2786]: E0120 00:48:30.148903 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:30.154498 kubelet[2786]: E0120 00:48:30.152716 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:30.154498 kubelet[2786]: W0120 00:48:30.153241 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:30.165119 kubelet[2786]: E0120 00:48:30.165047 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:30.167893 kubelet[2786]: E0120 00:48:30.167778 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:48:30.167893 kubelet[2786]: W0120 00:48:30.167808 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:48:30.167893 kubelet[2786]: E0120 00:48:30.167839 2786 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:48:30.220799 containerd[1602]: time="2026-01-20T00:48:30.220124193Z" level=info msg="StartContainer for \"4da311550f6c6a91f9e608f36b2b9674ed799fafe2035a1476d6f45a561c59aa\" returns successfully" Jan 20 00:48:30.427328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4da311550f6c6a91f9e608f36b2b9674ed799fafe2035a1476d6f45a561c59aa-rootfs.mount: Deactivated successfully. 
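The burst of kubelet errors above comes from the FlexVolume prober: kubelet found the plugin directory nodeagent~uds, tried to exec the driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init subcommand, got no executable and therefore empty output, and then failed to unmarshal that empty string as the driver's JSON status reply. A minimal sketch of a driver binary that would satisfy the init probe follows; the install path and subcommand are taken from the log, while the exact JSON fields follow the usual FlexVolume status convention and should be treated as an assumption, not this cluster's actual driver.

// flexvolume init stub: a sketch of the binary kubelet is probing above.
// Assumption: kubelet execs the driver with a subcommand (init here) and
// parses a JSON object with a top-level "status" field from stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure", "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		// Never exit with empty stdout: that is exactly what produces
		// "unexpected end of JSON input" in the kubelet log above.
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported", Message: "command not implemented: " + os.Args[1]})
	}
}

Installing any such executable at the probed path would quiet the probe spam; the real fix is presumably deploying whatever node agent owns the uds driver directory.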
Jan 20 00:48:30.562853 containerd[1602]: time="2026-01-20T00:48:30.560012268Z" level=info msg="shim disconnected" id=4da311550f6c6a91f9e608f36b2b9674ed799fafe2035a1476d6f45a561c59aa namespace=k8s.io Jan 20 00:48:30.562853 containerd[1602]: time="2026-01-20T00:48:30.560208583Z" level=warning msg="cleaning up after shim disconnected" id=4da311550f6c6a91f9e608f36b2b9674ed799fafe2035a1476d6f45a561c59aa namespace=k8s.io Jan 20 00:48:30.562853 containerd[1602]: time="2026-01-20T00:48:30.560252565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:48:30.777772 kubelet[2786]: E0120 00:48:30.777704 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:30.949635 kubelet[2786]: E0120 00:48:30.945871 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:30.949635 kubelet[2786]: E0120 00:48:30.946862 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:30.951263 containerd[1602]: time="2026-01-20T00:48:30.948683256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:48:31.954708 kubelet[2786]: E0120 00:48:31.952108 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:32.777519 kubelet[2786]: E0120 00:48:32.776844 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:34.809819 kubelet[2786]: E0120 00:48:34.804829 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:36.777547 kubelet[2786]: E0120 00:48:36.777434 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:37.243684 containerd[1602]: time="2026-01-20T00:48:37.243501809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 00:48:37.243684 containerd[1602]: time="2026-01-20T00:48:37.243513017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:48:37.249229 containerd[1602]: time="2026-01-20T00:48:37.249145317Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:48:37.260422 containerd[1602]: time="2026-01-20T00:48:37.258801781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:48:37.260675 containerd[1602]: time="2026-01-20T00:48:37.260596369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.311863993s" Jan 20 00:48:37.260675 containerd[1602]: time="2026-01-20T00:48:37.260665226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 00:48:37.267937 containerd[1602]: time="2026-01-20T00:48:37.266662623Z" level=info msg="CreateContainer within sandbox \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:48:37.319654 containerd[1602]: time="2026-01-20T00:48:37.319454114Z" level=info msg="CreateContainer within sandbox \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a9bebfaf6f6018b71b0bc6a6aa0d5319fe84b18178142bba73ca61a235d8280e\"" Jan 20 00:48:37.324735 containerd[1602]: time="2026-01-20T00:48:37.324654218Z" level=info msg="StartContainer for \"a9bebfaf6f6018b71b0bc6a6aa0d5319fe84b18178142bba73ca61a235d8280e\"" Jan 20 00:48:37.513586 containerd[1602]: time="2026-01-20T00:48:37.513351321Z" level=info msg="StartContainer for \"a9bebfaf6f6018b71b0bc6a6aa0d5319fe84b18178142bba73ca61a235d8280e\" returns successfully" Jan 20 00:48:38.003790 kubelet[2786]: E0120 00:48:38.003704 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:38.779507 kubelet[2786]: E0120 00:48:38.777343 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:39.018679 kubelet[2786]: E0120 00:48:39.015889 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:39.732749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9bebfaf6f6018b71b0bc6a6aa0d5319fe84b18178142bba73ca61a235d8280e-rootfs.mount: Deactivated successfully. 
Jan 20 00:48:39.745832 containerd[1602]: time="2026-01-20T00:48:39.745713139Z" level=info msg="shim disconnected" id=a9bebfaf6f6018b71b0bc6a6aa0d5319fe84b18178142bba73ca61a235d8280e namespace=k8s.io Jan 20 00:48:39.745832 containerd[1602]: time="2026-01-20T00:48:39.745799811Z" level=warning msg="cleaning up after shim disconnected" id=a9bebfaf6f6018b71b0bc6a6aa0d5319fe84b18178142bba73ca61a235d8280e namespace=k8s.io Jan 20 00:48:39.745832 containerd[1602]: time="2026-01-20T00:48:39.745815850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:48:39.774950 kubelet[2786]: I0120 00:48:39.774879 2786 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:48:39.965547 kubelet[2786]: I0120 00:48:39.961409 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a-config\") pod \"goldmane-666569f655-vmzpv\" (UID: \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\") " pod="calico-system/goldmane-666569f655-vmzpv" Jan 20 00:48:39.965547 kubelet[2786]: I0120 00:48:39.961484 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5-config-volume\") pod \"coredns-668d6bf9bc-xqh9g\" (UID: \"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5\") " pod="kube-system/coredns-668d6bf9bc-xqh9g" Jan 20 00:48:39.965547 kubelet[2786]: I0120 00:48:39.961512 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsm2\" (UniqueName: \"kubernetes.io/projected/7442950d-347c-4ccb-839f-bbcef74b512f-kube-api-access-fpsm2\") pod \"calico-apiserver-7748477466-xqhsk\" (UID: \"7442950d-347c-4ccb-839f-bbcef74b512f\") " pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" Jan 20 00:48:39.965547 kubelet[2786]: I0120 00:48:39.961546 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7442950d-347c-4ccb-839f-bbcef74b512f-calico-apiserver-certs\") pod \"calico-apiserver-7748477466-xqhsk\" (UID: \"7442950d-347c-4ccb-839f-bbcef74b512f\") " pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" Jan 20 00:48:39.965547 kubelet[2786]: I0120 00:48:39.961575 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-ca-bundle\") pod \"whisker-57ccb4848f-ng25j\" (UID: \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\") " pod="calico-system/whisker-57ccb4848f-ng25j" Jan 20 00:48:39.968709 kubelet[2786]: I0120 00:48:39.961598 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsnnd\" (UniqueName: \"kubernetes.io/projected/9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a-kube-api-access-qsnnd\") pod \"goldmane-666569f655-vmzpv\" (UID: \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\") " pod="calico-system/goldmane-666569f655-vmzpv" Jan 20 00:48:39.968709 kubelet[2786]: I0120 00:48:39.961622 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm69\" (UniqueName: \"kubernetes.io/projected/bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5-kube-api-access-sqm69\") pod \"coredns-668d6bf9bc-xqh9g\" (UID: \"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5\") " 
pod="kube-system/coredns-668d6bf9bc-xqh9g" Jan 20 00:48:39.968709 kubelet[2786]: I0120 00:48:39.961653 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crp97\" (UniqueName: \"kubernetes.io/projected/1f255c2e-3546-405d-a567-940c6cad406e-kube-api-access-crp97\") pod \"coredns-668d6bf9bc-h28hk\" (UID: \"1f255c2e-3546-405d-a567-940c6cad406e\") " pod="kube-system/coredns-668d6bf9bc-h28hk" Jan 20 00:48:39.968709 kubelet[2786]: I0120 00:48:39.961675 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a-goldmane-ca-bundle\") pod \"goldmane-666569f655-vmzpv\" (UID: \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\") " pod="calico-system/goldmane-666569f655-vmzpv" Jan 20 00:48:39.968709 kubelet[2786]: I0120 00:48:39.961705 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/303ab104-f18e-4de9-832d-feef41e44244-calico-apiserver-certs\") pod \"calico-apiserver-7ddd4777cd-jcj86\" (UID: \"303ab104-f18e-4de9-832d-feef41e44244\") " pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" Jan 20 00:48:39.970686 kubelet[2786]: I0120 00:48:39.961730 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9pl\" (UniqueName: \"kubernetes.io/projected/303ab104-f18e-4de9-832d-feef41e44244-kube-api-access-wh9pl\") pod \"calico-apiserver-7ddd4777cd-jcj86\" (UID: \"303ab104-f18e-4de9-832d-feef41e44244\") " pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" Jan 20 00:48:39.970686 kubelet[2786]: I0120 00:48:39.961756 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-backend-key-pair\") pod \"whisker-57ccb4848f-ng25j\" (UID: \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\") " pod="calico-system/whisker-57ccb4848f-ng25j" Jan 20 00:48:39.970686 kubelet[2786]: I0120 00:48:39.961780 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f255c2e-3546-405d-a567-940c6cad406e-config-volume\") pod \"coredns-668d6bf9bc-h28hk\" (UID: \"1f255c2e-3546-405d-a567-940c6cad406e\") " pod="kube-system/coredns-668d6bf9bc-h28hk" Jan 20 00:48:39.970686 kubelet[2786]: I0120 00:48:39.961804 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a-goldmane-key-pair\") pod \"goldmane-666569f655-vmzpv\" (UID: \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\") " pod="calico-system/goldmane-666569f655-vmzpv" Jan 20 00:48:39.970686 kubelet[2786]: I0120 00:48:39.961825 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvsw\" (UniqueName: \"kubernetes.io/projected/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-kube-api-access-8fvsw\") pod \"whisker-57ccb4848f-ng25j\" (UID: \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\") " pod="calico-system/whisker-57ccb4848f-ng25j" Jan 20 00:48:40.049091 kubelet[2786]: E0120 00:48:40.042030 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:40.061573 containerd[1602]: time="2026-01-20T00:48:40.061414283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:48:40.063032 kubelet[2786]: I0120 00:48:40.062194 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6f27543-10cf-4ae1-9e7a-a66dba01cb01-tigera-ca-bundle\") pod \"calico-kube-controllers-55cdf5b57-92x4l\" (UID: \"c6f27543-10cf-4ae1-9e7a-a66dba01cb01\") " pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" Jan 20 00:48:40.068939 kubelet[2786]: I0120 00:48:40.064183 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbzsr\" (UniqueName: \"kubernetes.io/projected/c6f27543-10cf-4ae1-9e7a-a66dba01cb01-kube-api-access-wbzsr\") pod \"calico-kube-controllers-55cdf5b57-92x4l\" (UID: \"c6f27543-10cf-4ae1-9e7a-a66dba01cb01\") " pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" Jan 20 00:48:40.068939 kubelet[2786]: I0120 00:48:40.064256 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e16703d-6774-4dbd-a448-684d9c6307e4-calico-apiserver-certs\") pod \"calico-apiserver-7ddd4777cd-f4nqr\" (UID: \"7e16703d-6774-4dbd-a448-684d9c6307e4\") " pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" Jan 20 00:48:40.068939 kubelet[2786]: I0120 00:48:40.064440 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkh4h\" (UniqueName: \"kubernetes.io/projected/7e16703d-6774-4dbd-a448-684d9c6307e4-kube-api-access-gkh4h\") pod \"calico-apiserver-7ddd4777cd-f4nqr\" (UID: \"7e16703d-6774-4dbd-a448-684d9c6307e4\") " pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" Jan 20 00:48:40.216544 containerd[1602]: time="2026-01-20T00:48:40.214483818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-jcj86,Uid:303ab104-f18e-4de9-832d-feef41e44244,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:48:40.253722 containerd[1602]: time="2026-01-20T00:48:40.252364141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmzpv,Uid:9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a,Namespace:calico-system,Attempt:0,}" Jan 20 00:48:40.275457 kubelet[2786]: E0120 00:48:40.275324 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:40.277174 containerd[1602]: time="2026-01-20T00:48:40.276427797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xqh9g,Uid:bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5,Namespace:kube-system,Attempt:0,}" Jan 20 00:48:40.277383 containerd[1602]: time="2026-01-20T00:48:40.277349131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57ccb4848f-ng25j,Uid:a8a518ab-c87e-46fd-bf3c-323ce2b95b5f,Namespace:calico-system,Attempt:0,}" Jan 20 00:48:40.308584 kubelet[2786]: E0120 00:48:40.304781 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:48:40.336029 containerd[1602]: time="2026-01-20T00:48:40.331075066Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-f4nqr,Uid:7e16703d-6774-4dbd-a448-684d9c6307e4,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:48:40.340385 containerd[1602]: time="2026-01-20T00:48:40.337462080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h28hk,Uid:1f255c2e-3546-405d-a567-940c6cad406e,Namespace:kube-system,Attempt:0,}" Jan 20 00:48:40.349931 containerd[1602]: time="2026-01-20T00:48:40.349165504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7748477466-xqhsk,Uid:7442950d-347c-4ccb-839f-bbcef74b512f,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:48:40.356806 containerd[1602]: time="2026-01-20T00:48:40.353683980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdf5b57-92x4l,Uid:c6f27543-10cf-4ae1-9e7a-a66dba01cb01,Namespace:calico-system,Attempt:0,}" Jan 20 00:48:40.797901 containerd[1602]: time="2026-01-20T00:48:40.797506727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffgch,Uid:946b9e08-0972-42be-947f-c9b1fe484382,Namespace:calico-system,Attempt:0,}" Jan 20 00:48:40.902906 containerd[1602]: time="2026-01-20T00:48:40.902792215Z" level=error msg="Failed to destroy network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:40.907482 containerd[1602]: time="2026-01-20T00:48:40.907341761Z" level=error msg="encountered an error cleaning up failed sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:40.907482 containerd[1602]: time="2026-01-20T00:48:40.907431046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-jcj86,Uid:303ab104-f18e-4de9-832d-feef41e44244,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:40.920378 kubelet[2786]: E0120 00:48:40.920197 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:40.920378 kubelet[2786]: E0120 00:48:40.920325 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" Jan 20 00:48:40.920378 kubelet[2786]: E0120 00:48:40.920360 2786 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" Jan 20 00:48:40.920751 kubelet[2786]: E0120 00:48:40.920435 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ddd4777cd-jcj86_calico-apiserver(303ab104-f18e-4de9-832d-feef41e44244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ddd4777cd-jcj86_calico-apiserver(303ab104-f18e-4de9-832d-feef41e44244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:48:41.086233 kubelet[2786]: I0120 00:48:41.086043 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:48:41.161567 containerd[1602]: time="2026-01-20T00:48:41.158917975Z" level=info msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" Jan 20 00:48:41.163188 containerd[1602]: time="2026-01-20T00:48:41.163107231Z" level=info msg="Ensure that sandbox fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b in task-service has been cleanup successfully" Jan 20 00:48:41.266020 containerd[1602]: time="2026-01-20T00:48:41.265870948Z" level=error msg="Failed to destroy network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.268021 containerd[1602]: time="2026-01-20T00:48:41.267074047Z" level=error msg="Failed to destroy network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.273139 containerd[1602]: time="2026-01-20T00:48:41.272922219Z" level=error msg="encountered an error cleaning up failed sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.273139 containerd[1602]: time="2026-01-20T00:48:41.273041040Z" level=error msg="encountered an error cleaning up failed sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 20 00:48:41.273139 containerd[1602]: time="2026-01-20T00:48:41.273108747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57ccb4848f-ng25j,Uid:a8a518ab-c87e-46fd-bf3c-323ce2b95b5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.273466 containerd[1602]: time="2026-01-20T00:48:41.273059875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmzpv,Uid:9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.273856 kubelet[2786]: E0120 00:48:41.273646 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.273856 kubelet[2786]: E0120 00:48:41.273736 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vmzpv" Jan 20 00:48:41.273856 kubelet[2786]: E0120 00:48:41.273772 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vmzpv" Jan 20 00:48:41.274069 kubelet[2786]: E0120 00:48:41.273837 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vmzpv_calico-system(9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vmzpv_calico-system(9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:48:41.276476 kubelet[2786]: E0120 00:48:41.276310 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.276476 kubelet[2786]: E0120 00:48:41.276384 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57ccb4848f-ng25j" Jan 20 00:48:41.276476 kubelet[2786]: E0120 00:48:41.276416 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57ccb4848f-ng25j" Jan 20 00:48:41.277007 kubelet[2786]: E0120 00:48:41.276476 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57ccb4848f-ng25j_calico-system(a8a518ab-c87e-46fd-bf3c-323ce2b95b5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57ccb4848f-ng25j_calico-system(a8a518ab-c87e-46fd-bf3c-323ce2b95b5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57ccb4848f-ng25j" podUID="a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" Jan 20 00:48:41.331495 containerd[1602]: time="2026-01-20T00:48:41.331393470Z" level=error msg="Failed to destroy network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.333408 containerd[1602]: time="2026-01-20T00:48:41.332216382Z" level=error msg="encountered an error cleaning up failed sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.333408 containerd[1602]: time="2026-01-20T00:48:41.333223326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7748477466-xqhsk,Uid:7442950d-347c-4ccb-839f-bbcef74b512f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.336796 kubelet[2786]: E0120 00:48:41.333651 2786 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.336796 kubelet[2786]: E0120 00:48:41.333736 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" Jan 20 00:48:41.336796 kubelet[2786]: E0120 00:48:41.333771 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" Jan 20 00:48:41.338376 kubelet[2786]: E0120 00:48:41.333866 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7748477466-xqhsk_calico-apiserver(7442950d-347c-4ccb-839f-bbcef74b512f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7748477466-xqhsk_calico-apiserver(7442950d-347c-4ccb-839f-bbcef74b512f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:48:41.385709 containerd[1602]: time="2026-01-20T00:48:41.385573106Z" level=error msg="Failed to destroy network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.389483 containerd[1602]: time="2026-01-20T00:48:41.386948876Z" level=error msg="encountered an error cleaning up failed sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.389483 containerd[1602]: time="2026-01-20T00:48:41.387097423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-f4nqr,Uid:7e16703d-6774-4dbd-a448-684d9c6307e4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 20 00:48:41.389728 kubelet[2786]: E0120 00:48:41.389488 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.389799 kubelet[2786]: E0120 00:48:41.389742 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" Jan 20 00:48:41.389799 kubelet[2786]: E0120 00:48:41.389783 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" Jan 20 00:48:41.389890 kubelet[2786]: E0120 00:48:41.389850 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ddd4777cd-f4nqr_calico-apiserver(7e16703d-6774-4dbd-a448-684d9c6307e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ddd4777cd-f4nqr_calico-apiserver(7e16703d-6774-4dbd-a448-684d9c6307e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:48:41.450354 containerd[1602]: time="2026-01-20T00:48:41.450149378Z" level=error msg="Failed to destroy network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.459609 containerd[1602]: time="2026-01-20T00:48:41.458753778Z" level=error msg="encountered an error cleaning up failed sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.459609 containerd[1602]: time="2026-01-20T00:48:41.458868431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffgch,Uid:946b9e08-0972-42be-947f-c9b1fe484382,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.459826 kubelet[2786]: E0120 00:48:41.459325 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.459826 kubelet[2786]: E0120 00:48:41.459470 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:41.459826 kubelet[2786]: E0120 00:48:41.459518 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffgch" Jan 20 00:48:41.460155 kubelet[2786]: E0120 00:48:41.459596 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:41.486443 containerd[1602]: time="2026-01-20T00:48:41.486338940Z" level=error msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" failed" error="failed to destroy network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.487146 kubelet[2786]: E0120 00:48:41.487094 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:48:41.488737 kubelet[2786]: E0120 00:48:41.488548 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b"} Jan 20 00:48:41.488737 kubelet[2786]: E0120 00:48:41.488647 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"303ab104-f18e-4de9-832d-feef41e44244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:41.488737 kubelet[2786]: E0120 00:48:41.488692 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"303ab104-f18e-4de9-832d-feef41e44244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:48:41.490170 containerd[1602]: time="2026-01-20T00:48:41.490119605Z" level=error msg="Failed to destroy network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.503633 containerd[1602]: time="2026-01-20T00:48:41.503568339Z" level=error msg="encountered an error cleaning up failed sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.504204 containerd[1602]: time="2026-01-20T00:48:41.504020121Z" level=error msg="Failed to destroy network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.508165 containerd[1602]: time="2026-01-20T00:48:41.508105983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h28hk,Uid:1f255c2e-3546-405d-a567-940c6cad406e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.508779 kubelet[2786]: E0120 00:48:41.508723 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
00:48:41.509414 kubelet[2786]: E0120 00:48:41.509375 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h28hk" Jan 20 00:48:41.509569 kubelet[2786]: E0120 00:48:41.509537 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h28hk" Jan 20 00:48:41.509748 kubelet[2786]: E0120 00:48:41.509706 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h28hk_kube-system(1f255c2e-3546-405d-a567-940c6cad406e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h28hk_kube-system(1f255c2e-3546-405d-a567-940c6cad406e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h28hk" podUID="1f255c2e-3546-405d-a567-940c6cad406e" Jan 20 00:48:41.513127 containerd[1602]: time="2026-01-20T00:48:41.513074118Z" level=error msg="encountered an error cleaning up failed sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.513365 containerd[1602]: time="2026-01-20T00:48:41.513322890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xqh9g,Uid:bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.513840 kubelet[2786]: E0120 00:48:41.513798 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.514066 kubelet[2786]: E0120 00:48:41.514035 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xqh9g" Jan 20 00:48:41.514216 kubelet[2786]: E0120 00:48:41.514183 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xqh9g" Jan 20 00:48:41.514424 kubelet[2786]: E0120 00:48:41.514382 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xqh9g_kube-system(bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xqh9g_kube-system(bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xqh9g" podUID="bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5" Jan 20 00:48:41.541647 containerd[1602]: time="2026-01-20T00:48:41.538713835Z" level=error msg="Failed to destroy network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.543114 containerd[1602]: time="2026-01-20T00:48:41.543012630Z" level=error msg="encountered an error cleaning up failed sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.543246 containerd[1602]: time="2026-01-20T00:48:41.543114609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdf5b57-92x4l,Uid:c6f27543-10cf-4ae1-9e7a-a66dba01cb01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.547053 kubelet[2786]: E0120 00:48:41.546874 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:41.547225 kubelet[2786]: E0120 00:48:41.547079 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" Jan 20 00:48:41.547225 kubelet[2786]: E0120 00:48:41.547120 2786 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" Jan 20 00:48:41.547225 kubelet[2786]: E0120 00:48:41.547187 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55cdf5b57-92x4l_calico-system(c6f27543-10cf-4ae1-9e7a-a66dba01cb01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55cdf5b57-92x4l_calico-system(c6f27543-10cf-4ae1-9e7a-a66dba01cb01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:48:41.744568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e-shm.mount: Deactivated successfully. Jan 20 00:48:41.744864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f-shm.mount: Deactivated successfully. Jan 20 00:48:41.745147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75-shm.mount: Deactivated successfully. Jan 20 00:48:41.745412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb-shm.mount: Deactivated successfully. Jan 20 00:48:41.745616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5-shm.mount: Deactivated successfully. Jan 20 00:48:41.745831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171-shm.mount: Deactivated successfully. Jan 20 00:48:41.746172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e-shm.mount: Deactivated successfully. Jan 20 00:48:41.746442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b-shm.mount: Deactivated successfully. 
Jan 20 00:48:42.099862 kubelet[2786]: I0120 00:48:42.097207 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:48:42.108479 containerd[1602]: time="2026-01-20T00:48:42.105009699Z" level=info msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" Jan 20 00:48:42.108479 containerd[1602]: time="2026-01-20T00:48:42.105307032Z" level=info msg="Ensure that sandbox b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f in task-service has been cleanup successfully" Jan 20 00:48:42.120726 kubelet[2786]: I0120 00:48:42.117724 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:48:42.135565 containerd[1602]: time="2026-01-20T00:48:42.125446590Z" level=info msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" Jan 20 00:48:42.135565 containerd[1602]: time="2026-01-20T00:48:42.125660356Z" level=info msg="Ensure that sandbox 82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a in task-service has been cleanup successfully" Jan 20 00:48:42.149703 kubelet[2786]: I0120 00:48:42.148767 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:48:42.160011 containerd[1602]: time="2026-01-20T00:48:42.159887717Z" level=info msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" Jan 20 00:48:42.162426 containerd[1602]: time="2026-01-20T00:48:42.160244852Z" level=info msg="Ensure that sandbox bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75 in task-service has been cleanup successfully" Jan 20 00:48:42.172398 kubelet[2786]: I0120 00:48:42.171184 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:48:42.172827 containerd[1602]: time="2026-01-20T00:48:42.172786634Z" level=info msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" Jan 20 00:48:42.173331 containerd[1602]: time="2026-01-20T00:48:42.173254615Z" level=info msg="Ensure that sandbox 2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb in task-service has been cleanup successfully" Jan 20 00:48:42.182755 kubelet[2786]: I0120 00:48:42.182662 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:48:42.209426 containerd[1602]: time="2026-01-20T00:48:42.205000239Z" level=info msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" Jan 20 00:48:42.209426 containerd[1602]: time="2026-01-20T00:48:42.205306851Z" level=info msg="Ensure that sandbox 6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5 in task-service has been cleanup successfully" Jan 20 00:48:42.221384 kubelet[2786]: I0120 00:48:42.220754 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:48:42.223043 containerd[1602]: time="2026-01-20T00:48:42.222942585Z" level=info msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" Jan 20 00:48:42.223783 
containerd[1602]: time="2026-01-20T00:48:42.223433419Z" level=info msg="Ensure that sandbox f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e in task-service has been cleanup successfully" Jan 20 00:48:42.242091 kubelet[2786]: I0120 00:48:42.239931 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:48:42.245820 containerd[1602]: time="2026-01-20T00:48:42.245683337Z" level=info msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" Jan 20 00:48:42.246107 containerd[1602]: time="2026-01-20T00:48:42.246049659Z" level=info msg="Ensure that sandbox 9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171 in task-service has been cleanup successfully" Jan 20 00:48:42.266732 kubelet[2786]: I0120 00:48:42.265590 2786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:48:42.268902 containerd[1602]: time="2026-01-20T00:48:42.268762752Z" level=info msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" Jan 20 00:48:42.316144 containerd[1602]: time="2026-01-20T00:48:42.315948930Z" level=info msg="Ensure that sandbox fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e in task-service has been cleanup successfully" Jan 20 00:48:42.483357 containerd[1602]: time="2026-01-20T00:48:42.483257332Z" level=error msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" failed" error="failed to destroy network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.484652 kubelet[2786]: E0120 00:48:42.484471 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:48:42.484991 kubelet[2786]: E0120 00:48:42.484855 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f"} Jan 20 00:48:42.485205 kubelet[2786]: E0120 00:48:42.485131 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6f27543-10cf-4ae1-9e7a-a66dba01cb01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.489833 kubelet[2786]: E0120 00:48:42.488873 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6f27543-10cf-4ae1-9e7a-a66dba01cb01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:48:42.512099 containerd[1602]: time="2026-01-20T00:48:42.512031338Z" level=error msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" failed" error="failed to destroy network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.513207 kubelet[2786]: E0120 00:48:42.512766 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:48:42.513207 kubelet[2786]: E0120 00:48:42.512837 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e"} Jan 20 00:48:42.513207 kubelet[2786]: E0120 00:48:42.512897 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.513207 kubelet[2786]: E0120 00:48:42.512934 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:48:42.555055 containerd[1602]: time="2026-01-20T00:48:42.553797369Z" level=error msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" failed" error="failed to destroy network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.556071 kubelet[2786]: E0120 00:48:42.555716 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:48:42.556750 containerd[1602]: time="2026-01-20T00:48:42.555841693Z" level=error msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" failed" error="failed to destroy network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.556859 kubelet[2786]: E0120 00:48:42.556564 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:48:42.556859 kubelet[2786]: E0120 00:48:42.556616 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75"} Jan 20 00:48:42.556859 kubelet[2786]: E0120 00:48:42.556662 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f255c2e-3546-405d-a567-940c6cad406e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.556859 kubelet[2786]: E0120 00:48:42.556701 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f255c2e-3546-405d-a567-940c6cad406e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h28hk" podUID="1f255c2e-3546-405d-a567-940c6cad406e" Jan 20 00:48:42.558628 kubelet[2786]: E0120 00:48:42.557047 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5"} Jan 20 00:48:42.558628 kubelet[2786]: E0120 00:48:42.558338 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 20 00:48:42.558628 kubelet[2786]: E0120 00:48:42.558383 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xqh9g" podUID="bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5" Jan 20 00:48:42.560760 containerd[1602]: time="2026-01-20T00:48:42.559863888Z" level=error msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" failed" error="failed to destroy network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.560888 kubelet[2786]: E0120 00:48:42.560549 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:48:42.560888 kubelet[2786]: E0120 00:48:42.560601 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a"} Jan 20 00:48:42.560888 kubelet[2786]: E0120 00:48:42.560645 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"946b9e08-0972-42be-947f-c9b1fe484382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.560888 kubelet[2786]: E0120 00:48:42.560676 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"946b9e08-0972-42be-947f-c9b1fe484382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:42.591328 containerd[1602]: time="2026-01-20T00:48:42.591232284Z" level=error msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" failed" error="failed to destroy network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 20 00:48:42.591824 kubelet[2786]: E0120 00:48:42.591764 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:48:42.592124 kubelet[2786]: E0120 00:48:42.592086 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb"} Jan 20 00:48:42.592769 kubelet[2786]: E0120 00:48:42.592261 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e16703d-6774-4dbd-a448-684d9c6307e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.593790 kubelet[2786]: E0120 00:48:42.593749 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e16703d-6774-4dbd-a448-684d9c6307e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:48:42.597294 containerd[1602]: time="2026-01-20T00:48:42.596663228Z" level=error msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" failed" error="failed to destroy network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.597431 kubelet[2786]: E0120 00:48:42.597016 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:48:42.597431 kubelet[2786]: E0120 00:48:42.597072 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171"} Jan 20 00:48:42.597431 kubelet[2786]: E0120 00:48:42.597119 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.597431 kubelet[2786]: E0120 00:48:42.597155 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57ccb4848f-ng25j" podUID="a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" Jan 20 00:48:42.604190 containerd[1602]: time="2026-01-20T00:48:42.603186417Z" level=error msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" failed" error="failed to destroy network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:42.605187 kubelet[2786]: E0120 00:48:42.605127 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:48:42.605486 kubelet[2786]: E0120 00:48:42.605353 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e"} Jan 20 00:48:42.605486 kubelet[2786]: E0120 00:48:42.605439 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7442950d-347c-4ccb-839f-bbcef74b512f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:42.605486 kubelet[2786]: E0120 00:48:42.605475 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7442950d-347c-4ccb-839f-bbcef74b512f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:48:53.781942 containerd[1602]: time="2026-01-20T00:48:53.781870461Z" level=info msg="StopPodSandbox for 
\"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" Jan 20 00:48:53.893779 containerd[1602]: time="2026-01-20T00:48:53.893578545Z" level=error msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" failed" error="failed to destroy network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:53.897453 kubelet[2786]: E0120 00:48:53.894048 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:48:53.897453 kubelet[2786]: E0120 00:48:53.894134 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a"} Jan 20 00:48:53.897453 kubelet[2786]: E0120 00:48:53.894188 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"946b9e08-0972-42be-947f-c9b1fe484382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:53.897453 kubelet[2786]: E0120 00:48:53.894221 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"946b9e08-0972-42be-947f-c9b1fe484382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:48:54.781555 containerd[1602]: time="2026-01-20T00:48:54.780656324Z" level=info msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" Jan 20 00:48:54.781555 containerd[1602]: time="2026-01-20T00:48:54.780885493Z" level=info msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" Jan 20 00:48:54.782721 containerd[1602]: time="2026-01-20T00:48:54.782479661Z" level=info msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" Jan 20 00:48:54.785463 containerd[1602]: time="2026-01-20T00:48:54.782928517Z" level=info msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" Jan 20 00:48:54.786864 containerd[1602]: time="2026-01-20T00:48:54.786830952Z" level=info msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" Jan 20 00:48:55.014029 containerd[1602]: time="2026-01-20T00:48:55.013878676Z" level=error msg="StopPodSandbox for 
\"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" failed" error="failed to destroy network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:55.015733 kubelet[2786]: E0120 00:48:55.015517 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:48:55.015733 kubelet[2786]: E0120 00:48:55.015603 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f"} Jan 20 00:48:55.015733 kubelet[2786]: E0120 00:48:55.015657 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6f27543-10cf-4ae1-9e7a-a66dba01cb01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:55.015733 kubelet[2786]: E0120 00:48:55.015691 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6f27543-10cf-4ae1-9e7a-a66dba01cb01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:48:55.019798 containerd[1602]: time="2026-01-20T00:48:55.019703062Z" level=error msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" failed" error="failed to destroy network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:55.020378 kubelet[2786]: E0120 00:48:55.020332 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:48:55.020662 kubelet[2786]: E0120 00:48:55.020538 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75"} Jan 20 00:48:55.020662 kubelet[2786]: E0120 00:48:55.020592 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f255c2e-3546-405d-a567-940c6cad406e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:55.020662 kubelet[2786]: E0120 00:48:55.020625 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f255c2e-3546-405d-a567-940c6cad406e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h28hk" podUID="1f255c2e-3546-405d-a567-940c6cad406e" Jan 20 00:48:55.057429 containerd[1602]: time="2026-01-20T00:48:55.057238406Z" level=error msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" failed" error="failed to destroy network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:55.062692 kubelet[2786]: E0120 00:48:55.060064 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:48:55.062692 kubelet[2786]: E0120 00:48:55.060143 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e"} Jan 20 00:48:55.062692 kubelet[2786]: E0120 00:48:55.060193 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:55.062692 kubelet[2786]: E0120 00:48:55.060231 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:48:55.063188 containerd[1602]: time="2026-01-20T00:48:55.061595953Z" level=error msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" failed" error="failed to destroy network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:55.063306 kubelet[2786]: E0120 00:48:55.061792 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:48:55.063306 kubelet[2786]: E0120 00:48:55.061869 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb"} Jan 20 00:48:55.063306 kubelet[2786]: E0120 00:48:55.061911 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e16703d-6774-4dbd-a448-684d9c6307e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:55.063306 kubelet[2786]: E0120 00:48:55.061942 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e16703d-6774-4dbd-a448-684d9c6307e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:48:55.079898 containerd[1602]: time="2026-01-20T00:48:55.078507042Z" level=error msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" failed" error="failed to destroy network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:55.080136 kubelet[2786]: E0120 00:48:55.079909 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:48:55.080136 kubelet[2786]: E0120 00:48:55.080035 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e"} Jan 20 00:48:55.080136 kubelet[2786]: E0120 00:48:55.080088 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7442950d-347c-4ccb-839f-bbcef74b512f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:55.080136 kubelet[2786]: E0120 00:48:55.080119 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7442950d-347c-4ccb-839f-bbcef74b512f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:48:55.793603 containerd[1602]: time="2026-01-20T00:48:55.792836950Z" level=info msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" Jan 20 00:48:56.368261 containerd[1602]: time="2026-01-20T00:48:56.367333005Z" level=error msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" failed" error="failed to destroy network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:56.382857 kubelet[2786]: E0120 00:48:56.382190 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:48:56.383901 kubelet[2786]: E0120 00:48:56.383211 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b"} Jan 20 00:48:56.383901 kubelet[2786]: E0120 00:48:56.383843 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"303ab104-f18e-4de9-832d-feef41e44244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jan 20 00:48:56.386668 kubelet[2786]: E0120 00:48:56.384038 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"303ab104-f18e-4de9-832d-feef41e44244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:48:56.784834 containerd[1602]: time="2026-01-20T00:48:56.784712091Z" level=info msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" Jan 20 00:48:56.786386 containerd[1602]: time="2026-01-20T00:48:56.784712119Z" level=info msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" Jan 20 00:48:56.899570 containerd[1602]: time="2026-01-20T00:48:56.899503447Z" level=error msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" failed" error="failed to destroy network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:56.903700 kubelet[2786]: E0120 00:48:56.903608 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:48:56.904079 kubelet[2786]: E0120 00:48:56.903773 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171"} Jan 20 00:48:56.904079 kubelet[2786]: E0120 00:48:56.903834 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:56.904079 kubelet[2786]: E0120 00:48:56.903877 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57ccb4848f-ng25j" podUID="a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" Jan 20 00:48:56.910364 containerd[1602]: 
time="2026-01-20T00:48:56.908166966Z" level=error msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" failed" error="failed to destroy network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:48:56.910489 kubelet[2786]: E0120 00:48:56.908535 2786 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:48:56.910489 kubelet[2786]: E0120 00:48:56.908592 2786 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5"} Jan 20 00:48:56.910489 kubelet[2786]: E0120 00:48:56.908643 2786 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:48:56.910489 kubelet[2786]: E0120 00:48:56.908680 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xqh9g" podUID="bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5" Jan 20 00:49:00.369630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108366075.mount: Deactivated successfully. 
Jan 20 00:49:00.487730 containerd[1602]: time="2026-01-20T00:49:00.487604157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:00.488636 containerd[1602]: time="2026-01-20T00:49:00.488552079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 00:49:00.497079 containerd[1602]: time="2026-01-20T00:49:00.496859550Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:00.500489 containerd[1602]: time="2026-01-20T00:49:00.500398676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:00.501480 containerd[1602]: time="2026-01-20T00:49:00.501395811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 20.439932576s" Jan 20 00:49:00.501480 containerd[1602]: time="2026-01-20T00:49:00.501462515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 00:49:00.549386 containerd[1602]: time="2026-01-20T00:49:00.549064398Z" level=info msg="CreateContainer within sandbox \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:49:00.699253 containerd[1602]: time="2026-01-20T00:49:00.697726171Z" level=info msg="CreateContainer within sandbox \"66b9f3e85ca2f19325a8270ae4047fc9b95758fe04b324c27a507e27743120fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"44f598f9fd848b40c2a6ead5202feb3fe35e246781a68b02e26a2c8c871af5e4\"" Jan 20 00:49:00.702595 containerd[1602]: time="2026-01-20T00:49:00.701113788Z" level=info msg="StartContainer for \"44f598f9fd848b40c2a6ead5202feb3fe35e246781a68b02e26a2c8c871af5e4\"" Jan 20 00:49:00.979544 containerd[1602]: time="2026-01-20T00:49:00.976487273Z" level=info msg="StartContainer for \"44f598f9fd848b40c2a6ead5202feb3fe35e246781a68b02e26a2c8c871af5e4\" returns successfully" Jan 20 00:49:01.015842 kubelet[2786]: E0120 00:49:01.015595 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:01.115472 kubelet[2786]: I0120 00:49:01.115390 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rmjqr" podStartSLOduration=2.1742329 podStartE2EDuration="38.115366166s" podCreationTimestamp="2026-01-20 00:48:23 +0000 UTC" firstStartedPulling="2026-01-20 00:48:24.563396654 +0000 UTC m=+35.171187355" lastFinishedPulling="2026-01-20 00:49:00.504529931 +0000 UTC m=+71.112320621" observedRunningTime="2026-01-20 00:49:01.108156928 +0000 UTC m=+71.715947658" watchObservedRunningTime="2026-01-20 00:49:01.115366166 +0000 UTC m=+71.723156866" Jan 20 00:49:01.351918 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 20 00:49:01.361063 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 20 00:49:01.860016 containerd[1602]: time="2026-01-20T00:49:01.859335732Z" level=info msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" Jan 20 00:49:02.023391 kubelet[2786]: E0120 00:49:02.022029 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.405 [INFO][4333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.405 [INFO][4333] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" iface="eth0" netns="/var/run/netns/cni-8cf8afad-af2a-2231-ef1e-5884b3138fca" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.410 [INFO][4333] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" iface="eth0" netns="/var/run/netns/cni-8cf8afad-af2a-2231-ef1e-5884b3138fca" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.414 [INFO][4333] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" iface="eth0" netns="/var/run/netns/cni-8cf8afad-af2a-2231-ef1e-5884b3138fca" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.414 [INFO][4333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.414 [INFO][4333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.791 [INFO][4366] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.793 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.794 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.832 [WARNING][4366] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.832 [INFO][4366] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.837 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:02.853807 containerd[1602]: 2026-01-20 00:49:02.847 [INFO][4333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:02.865171 systemd[1]: run-netns-cni\x2d8cf8afad\x2daf2a\x2d2231\x2def1e\x2d5884b3138fca.mount: Deactivated successfully. Jan 20 00:49:02.873476 containerd[1602]: time="2026-01-20T00:49:02.870484161Z" level=info msg="TearDown network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" successfully" Jan 20 00:49:02.873476 containerd[1602]: time="2026-01-20T00:49:02.870530898Z" level=info msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" returns successfully" Jan 20 00:49:02.963694 kubelet[2786]: I0120 00:49:02.963308 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvsw\" (UniqueName: \"kubernetes.io/projected/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-kube-api-access-8fvsw\") pod \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" (UID: \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\") " Jan 20 00:49:02.963694 kubelet[2786]: I0120 00:49:02.963484 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-backend-key-pair\") pod \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" (UID: \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\") " Jan 20 00:49:02.963694 kubelet[2786]: I0120 00:49:02.963544 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-ca-bundle\") pod \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\" (UID: \"a8a518ab-c87e-46fd-bf3c-323ce2b95b5f\") " Jan 20 00:49:02.964571 kubelet[2786]: I0120 00:49:02.964382 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" (UID: "a8a518ab-c87e-46fd-bf3c-323ce2b95b5f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:49:02.992132 systemd[1]: var-lib-kubelet-pods-a8a518ab\x2dc87e\x2d46fd\x2dbf3c\x2d323ce2b95b5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fvsw.mount: Deactivated successfully. 
Jan 20 00:49:02.996218 kubelet[2786]: I0120 00:49:02.996099 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-kube-api-access-8fvsw" (OuterVolumeSpecName: "kube-api-access-8fvsw") pod "a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" (UID: "a8a518ab-c87e-46fd-bf3c-323ce2b95b5f"). InnerVolumeSpecName "kube-api-access-8fvsw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:49:03.002032 kubelet[2786]: I0120 00:49:03.001559 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" (UID: "a8a518ab-c87e-46fd-bf3c-323ce2b95b5f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:49:03.010808 systemd[1]: var-lib-kubelet-pods-a8a518ab\x2dc87e\x2d46fd\x2dbf3c\x2d323ce2b95b5f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 00:49:03.076984 kubelet[2786]: I0120 00:49:03.076827 2786 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 00:49:03.076984 kubelet[2786]: I0120 00:49:03.076906 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fvsw\" (UniqueName: \"kubernetes.io/projected/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-kube-api-access-8fvsw\") on node \"localhost\" DevicePath \"\"" Jan 20 00:49:03.076984 kubelet[2786]: I0120 00:49:03.076937 2786 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 00:49:03.381212 kubelet[2786]: I0120 00:49:03.380764 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06b8bae0-3466-476f-9e43-40816e9ed87d-whisker-backend-key-pair\") pod \"whisker-6b7b664c8f-84jkd\" (UID: \"06b8bae0-3466-476f-9e43-40816e9ed87d\") " pod="calico-system/whisker-6b7b664c8f-84jkd" Jan 20 00:49:03.381212 kubelet[2786]: I0120 00:49:03.380869 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b8bae0-3466-476f-9e43-40816e9ed87d-whisker-ca-bundle\") pod \"whisker-6b7b664c8f-84jkd\" (UID: \"06b8bae0-3466-476f-9e43-40816e9ed87d\") " pod="calico-system/whisker-6b7b664c8f-84jkd" Jan 20 00:49:03.381212 kubelet[2786]: I0120 00:49:03.380908 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6sh\" (UniqueName: \"kubernetes.io/projected/06b8bae0-3466-476f-9e43-40816e9ed87d-kube-api-access-sx6sh\") pod \"whisker-6b7b664c8f-84jkd\" (UID: \"06b8bae0-3466-476f-9e43-40816e9ed87d\") " pod="calico-system/whisker-6b7b664c8f-84jkd" Jan 20 00:49:03.623492 containerd[1602]: time="2026-01-20T00:49:03.622034268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b7b664c8f-84jkd,Uid:06b8bae0-3466-476f-9e43-40816e9ed87d,Namespace:calico-system,Attempt:0,}" Jan 20 00:49:03.784418 kubelet[2786]: E0120 00:49:03.783579 2786 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:03.791368 kubelet[2786]: I0120 00:49:03.790930 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a518ab-c87e-46fd-bf3c-323ce2b95b5f" path="/var/lib/kubelet/pods/a8a518ab-c87e-46fd-bf3c-323ce2b95b5f/volumes" Jan 20 00:49:04.297305 systemd-networkd[1264]: cali7542577a2dc: Link UP Jan 20 00:49:04.297865 systemd-networkd[1264]: cali7542577a2dc: Gained carrier Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:03.830 [INFO][4388] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:03.892 [INFO][4388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6b7b664c8f--84jkd-eth0 whisker-6b7b664c8f- calico-system 06b8bae0-3466-476f-9e43-40816e9ed87d 1066 0 2026-01-20 00:49:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b7b664c8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6b7b664c8f-84jkd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7542577a2dc [] [] }} ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:03.893 [INFO][4388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.010 [INFO][4403] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" HandleID="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Workload="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.011 [INFO][4403] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" HandleID="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Workload="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f0190), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6b7b664c8f-84jkd", "timestamp":"2026-01-20 00:49:04.010792196 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.011 [INFO][4403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.011 [INFO][4403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.011 [INFO][4403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.047 [INFO][4403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.091 [INFO][4403] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.142 [INFO][4403] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.154 [INFO][4403] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.165 [INFO][4403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.165 [INFO][4403] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.179 [INFO][4403] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395 Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.216 [INFO][4403] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.244 [INFO][4403] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.244 [INFO][4403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" host="localhost" Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.244 [INFO][4403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:49:04.376258 containerd[1602]: 2026-01-20 00:49:04.244 [INFO][4403] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" HandleID="k8s-pod-network.4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Workload="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.380350 containerd[1602]: 2026-01-20 00:49:04.249 [INFO][4388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b7b664c8f--84jkd-eth0", GenerateName:"whisker-6b7b664c8f-", Namespace:"calico-system", SelfLink:"", UID:"06b8bae0-3466-476f-9e43-40816e9ed87d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 49, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b7b664c8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6b7b664c8f-84jkd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7542577a2dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:04.380350 containerd[1602]: 2026-01-20 00:49:04.250 [INFO][4388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.380350 containerd[1602]: 2026-01-20 00:49:04.250 [INFO][4388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7542577a2dc ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.380350 containerd[1602]: 2026-01-20 00:49:04.297 [INFO][4388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.380350 containerd[1602]: 2026-01-20 00:49:04.302 [INFO][4388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b7b664c8f--84jkd-eth0", GenerateName:"whisker-6b7b664c8f-", Namespace:"calico-system", SelfLink:"", UID:"06b8bae0-3466-476f-9e43-40816e9ed87d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 49, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b7b664c8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395", Pod:"whisker-6b7b664c8f-84jkd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7542577a2dc", MAC:"02:37:88:5d:c4:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:04.380350 containerd[1602]: 2026-01-20 00:49:04.363 [INFO][4388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395" Namespace="calico-system" Pod="whisker-6b7b664c8f-84jkd" WorkloadEndpoint="localhost-k8s-whisker--6b7b664c8f--84jkd-eth0" Jan 20 00:49:04.591520 containerd[1602]: time="2026-01-20T00:49:04.585671848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:04.591520 containerd[1602]: time="2026-01-20T00:49:04.585810677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:04.591520 containerd[1602]: time="2026-01-20T00:49:04.585839791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:04.591520 containerd[1602]: time="2026-01-20T00:49:04.586042439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:04.720448 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:49:04.771058 kernel: bpftool[4585]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 00:49:04.796026 containerd[1602]: time="2026-01-20T00:49:04.795795450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b7b664c8f-84jkd,Uid:06b8bae0-3466-476f-9e43-40816e9ed87d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e8045f4c1f4436f423abf16ff044cddee0bfc411f4ca2c76d14fcc82c94c395\"" Jan 20 00:49:04.802845 containerd[1602]: time="2026-01-20T00:49:04.802693134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:49:04.920173 containerd[1602]: time="2026-01-20T00:49:04.919681712Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:04.980606 containerd[1602]: time="2026-01-20T00:49:04.924675184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:49:04.980606 containerd[1602]: time="2026-01-20T00:49:04.927606885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:49:04.981549 kubelet[2786]: E0120 00:49:04.981111 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:49:04.981549 kubelet[2786]: E0120 00:49:04.981180 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:49:04.987797 kubelet[2786]: E0120 00:49:04.984323 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd15b3b8928842729e5a367f173cdad6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:04.990451 containerd[1602]: time="2026-01-20T00:49:04.989648329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:49:05.076684 containerd[1602]: time="2026-01-20T00:49:05.076196835Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:05.096829 containerd[1602]: time="2026-01-20T00:49:05.093396535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:49:05.096829 containerd[1602]: time="2026-01-20T00:49:05.093530815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:49:05.097207 kubelet[2786]: E0120 00:49:05.093764 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:49:05.097207 kubelet[2786]: E0120 00:49:05.093832 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:49:05.097380 kubelet[2786]: E0120 00:49:05.094078 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:05.097380 kubelet[2786]: E0120 00:49:05.095420 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:49:05.527036 systemd-networkd[1264]: vxlan.calico: Link UP Jan 20 00:49:05.527050 systemd-networkd[1264]: vxlan.calico: Gained carrier Jan 20 00:49:05.763252 systemd-networkd[1264]: cali7542577a2dc: Gained IPv6LL Jan 20 00:49:06.060157 
kubelet[2786]: E0120 00:49:06.059171 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:49:06.779866 kubelet[2786]: E0120 00:49:06.777511 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:06.780468 containerd[1602]: time="2026-01-20T00:49:06.780403960Z" level=info msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" Jan 20 00:49:06.924127 systemd-networkd[1264]: vxlan.calico: Gained IPv6LL Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:06.921 [INFO][4680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:06.922 [INFO][4680] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" iface="eth0" netns="/var/run/netns/cni-509bb8c2-39ce-6c68-49ed-12bb885a78a2" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:06.924 [INFO][4680] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" iface="eth0" netns="/var/run/netns/cni-509bb8c2-39ce-6c68-49ed-12bb885a78a2" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:06.925 [INFO][4680] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" iface="eth0" netns="/var/run/netns/cni-509bb8c2-39ce-6c68-49ed-12bb885a78a2" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:06.925 [INFO][4680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:06.925 [INFO][4680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.016 [INFO][4688] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.016 [INFO][4688] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.016 [INFO][4688] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.033 [WARNING][4688] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.033 [INFO][4688] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.041 [INFO][4688] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:07.057536 containerd[1602]: 2026-01-20 00:49:07.046 [INFO][4680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:07.065892 containerd[1602]: time="2026-01-20T00:49:07.062465280Z" level=info msg="TearDown network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" successfully" Jan 20 00:49:07.065892 containerd[1602]: time="2026-01-20T00:49:07.062506607Z" level=info msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" returns successfully" Jan 20 00:49:07.069722 containerd[1602]: time="2026-01-20T00:49:07.067650342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffgch,Uid:946b9e08-0972-42be-947f-c9b1fe484382,Namespace:calico-system,Attempt:1,}" Jan 20 00:49:07.070804 systemd[1]: run-netns-cni\x2d509bb8c2\x2d39ce\x2d6c68\x2d49ed\x2d12bb885a78a2.mount: Deactivated successfully. 
Jan 20 00:49:07.683672 systemd-networkd[1264]: calidb7ab818266: Link UP Jan 20 00:49:07.684174 systemd-networkd[1264]: calidb7ab818266: Gained carrier Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.287 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ffgch-eth0 csi-node-driver- calico-system 946b9e08-0972-42be-947f-c9b1fe484382 1095 0 2026-01-20 00:48:23 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ffgch eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidb7ab818266 [] [] }} ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.287 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.445 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" HandleID="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.445 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" HandleID="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033f0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ffgch", "timestamp":"2026-01-20 00:49:07.445346016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.445 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.445 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.445 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.474 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.499 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.547 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.580 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.591 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.591 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.600 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.622 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.658 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.658 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" host="localhost" Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.658 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
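
Each sandbox in this log walks the same IPAM sequence: acquire the host-wide lock, confirm the node's affinity for the block 192.168.88.128/26, claim the next free address, release the lock. That is why successive pods land on 192.168.88.129, .130 and, further down, .131. A /26 block carries 2^(32-26) = 64 addresses. A self-contained check of that arithmetic and of block membership, standard library only:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block this node holds an affinity for, per the ipam.go entries.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // Host bits = 32 - prefix length, so the block holds 2^6 = 64 addresses.
        fmt.Printf("%s holds %d addresses\n", block, 1<<(32-block.Bits()))

        // The addresses assigned in this log all fall inside the block.
        for _, s := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.131"} {
            fmt.Printf("%s in block: %v\n", s, block.Contains(netip.MustParseAddr(s)))
        }
    }
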
Jan 20 00:49:07.750160 containerd[1602]: 2026-01-20 00:49:07.658 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" HandleID="k8s-pod-network.2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.759096 containerd[1602]: 2026-01-20 00:49:07.667 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ffgch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"946b9e08-0972-42be-947f-c9b1fe484382", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ffgch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb7ab818266", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:07.759096 containerd[1602]: 2026-01-20 00:49:07.671 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.759096 containerd[1602]: 2026-01-20 00:49:07.671 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb7ab818266 ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.759096 containerd[1602]: 2026-01-20 00:49:07.681 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.759096 containerd[1602]: 2026-01-20 00:49:07.686 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch"
WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ffgch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"946b9e08-0972-42be-947f-c9b1fe484382", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d", Pod:"csi-node-driver-ffgch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb7ab818266", MAC:"76:f9:2d:ca:7b:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:07.759096 containerd[1602]: 2026-01-20 00:49:07.731 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d" Namespace="calico-system" Pod="csi-node-driver-ffgch" WorkloadEndpoint="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:07.787561 containerd[1602]: time="2026-01-20T00:49:07.782750513Z" level=info msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" Jan 20 00:49:07.858551 containerd[1602]: time="2026-01-20T00:49:07.857200319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:07.858551 containerd[1602]: time="2026-01-20T00:49:07.857310142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:07.858551 containerd[1602]: time="2026-01-20T00:49:07.857332474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:07.858551 containerd[1602]: time="2026-01-20T00:49:07.857722171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:07.971755 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:49:08.045399 containerd[1602]: time="2026-01-20T00:49:08.045141554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffgch,Uid:946b9e08-0972-42be-947f-c9b1fe484382,Namespace:calico-system,Attempt:1,} returns sandbox id \"2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d\"" Jan 20 00:49:08.064554 containerd[1602]: time="2026-01-20T00:49:08.063879654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:49:08.158368 containerd[1602]: time="2026-01-20T00:49:08.157772968Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:08.162387 containerd[1602]: time="2026-01-20T00:49:08.162188633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:49:08.163811 containerd[1602]: time="2026-01-20T00:49:08.162729781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:49:08.164236 kubelet[2786]: E0120 00:49:08.164107 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:49:08.164236 kubelet[2786]: E0120 00:49:08.164181 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:49:08.165234 kubelet[2786]: E0120 00:49:08.164852 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:08.178712 containerd[1602]: time="2026-01-20T00:49:08.176853424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.019 [INFO][4743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.019 [INFO][4743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" iface="eth0" netns="/var/run/netns/cni-019d2565-4ff8-f537-59ba-1717a772bbb3" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.021 [INFO][4743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" iface="eth0" netns="/var/run/netns/cni-019d2565-4ff8-f537-59ba-1717a772bbb3" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.024 [INFO][4743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" iface="eth0" netns="/var/run/netns/cni-019d2565-4ff8-f537-59ba-1717a772bbb3" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.024 [INFO][4743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.024 [INFO][4743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.131 [INFO][4788] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.133 [INFO][4788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.133 [INFO][4788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.174 [WARNING][4788] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.174 [INFO][4788] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.190 [INFO][4788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:08.223829 containerd[1602]: 2026-01-20 00:49:08.212 [INFO][4743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:08.245358 containerd[1602]: time="2026-01-20T00:49:08.234446690Z" level=info msg="TearDown network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" successfully" Jan 20 00:49:08.245358 containerd[1602]: time="2026-01-20T00:49:08.234494499Z" level=info msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" returns successfully" Jan 20 00:49:08.245358 containerd[1602]: time="2026-01-20T00:49:08.240869576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h28hk,Uid:1f255c2e-3546-405d-a567-940c6cad406e,Namespace:kube-system,Attempt:1,}" Jan 20 00:49:08.239114 systemd[1]: run-netns-cni\x2d019d2565\x2d4ff8\x2df537\x2d59ba\x2d1717a772bbb3.mount: Deactivated successfully. 
Jan 20 00:49:08.249918 kubelet[2786]: E0120 00:49:08.235202 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:08.305156 containerd[1602]: time="2026-01-20T00:49:08.305063352Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:08.309688 containerd[1602]: time="2026-01-20T00:49:08.309593051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:49:08.312461 containerd[1602]: time="2026-01-20T00:49:08.309847084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:49:08.312662 kubelet[2786]: E0120 00:49:08.310111 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:49:08.312662 kubelet[2786]: E0120 00:49:08.310177 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:49:08.312662 kubelet[2786]: E0120 00:49:08.310398 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:08.312662 kubelet[2786]: E0120 00:49:08.311671 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:49:08.782633 containerd[1602]: time="2026-01-20T00:49:08.780439375Z" level=info msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" Jan 20 00:49:08.784491 kubelet[2786]: E0120 00:49:08.780854 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:08.794409 containerd[1602]: time="2026-01-20T00:49:08.793697835Z" level=info msg="StopPodSandbox for 
\"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" Jan 20 00:49:08.865812 systemd-networkd[1264]: calid78bf62c3b6: Link UP Jan 20 00:49:08.880532 systemd-networkd[1264]: calid78bf62c3b6: Gained carrier Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.452 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--h28hk-eth0 coredns-668d6bf9bc- kube-system 1f255c2e-3546-405d-a567-940c6cad406e 1102 0 2026-01-20 00:47:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-h28hk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid78bf62c3b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.452 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.589 [INFO][4811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" HandleID="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.589 [INFO][4811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" HandleID="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a2130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-h28hk", "timestamp":"2026-01-20 00:49:08.589320017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.589 [INFO][4811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.589 [INFO][4811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.589 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.630 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.651 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.673 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.684 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.697 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.697 [INFO][4811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.711 [INFO][4811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.731 [INFO][4811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.772 [INFO][4811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.772 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" host="localhost" Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.772 [INFO][4811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:49:08.984524 containerd[1602]: 2026-01-20 00:49:08.773 [INFO][4811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" HandleID="k8s-pod-network.f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.985521 containerd[1602]: 2026-01-20 00:49:08.810 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h28hk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f255c2e-3546-405d-a567-940c6cad406e", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-h28hk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid78bf62c3b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:08.985521 containerd[1602]: 2026-01-20 00:49:08.815 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.985521 containerd[1602]: 2026-01-20 00:49:08.816 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid78bf62c3b6 ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.985521 containerd[1602]: 2026-01-20 00:49:08.866 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:08.985521 
containerd[1602]: 2026-01-20 00:49:08.880 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h28hk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f255c2e-3546-405d-a567-940c6cad406e", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b", Pod:"coredns-668d6bf9bc-h28hk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid78bf62c3b6", MAC:"6a:67:af:ff:a2:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:08.985521 containerd[1602]: 2026-01-20 00:49:08.968 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b" Namespace="kube-system" Pod="coredns-668d6bf9bc-h28hk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:09.126872 kubelet[2786]: E0120 00:49:09.125739 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" 
podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:49:09.160689 systemd-networkd[1264]: calidb7ab818266: Gained IPv6LL Jan 20 00:49:09.245439 containerd[1602]: time="2026-01-20T00:49:09.241534937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:09.245439 containerd[1602]: time="2026-01-20T00:49:09.243588333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:09.245439 containerd[1602]: time="2026-01-20T00:49:09.243608861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:09.245439 containerd[1602]: time="2026-01-20T00:49:09.243746136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:09.473665 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.284 [INFO][4848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.284 [INFO][4848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" iface="eth0" netns="/var/run/netns/cni-612a3c95-fc2d-7631-ec13-b39159a87858" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.292 [INFO][4848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" iface="eth0" netns="/var/run/netns/cni-612a3c95-fc2d-7631-ec13-b39159a87858" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.292 [INFO][4848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" iface="eth0" netns="/var/run/netns/cni-612a3c95-fc2d-7631-ec13-b39159a87858" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.292 [INFO][4848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.292 [INFO][4848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.471 [INFO][4891] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.474 [INFO][4891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.474 [INFO][4891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.500 [WARNING][4891] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.504 [INFO][4891] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.513 [INFO][4891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:09.533430 containerd[1602]: 2026-01-20 00:49:09.524 [INFO][4848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:09.542823 containerd[1602]: time="2026-01-20T00:49:09.542543792Z" level=info msg="TearDown network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" successfully" Jan 20 00:49:09.542823 containerd[1602]: time="2026-01-20T00:49:09.542591580Z" level=info msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" returns successfully" Jan 20 00:49:09.547943 containerd[1602]: time="2026-01-20T00:49:09.544212700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-f4nqr,Uid:7e16703d-6774-4dbd-a448-684d9c6307e4,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:49:09.562371 systemd[1]: run-netns-cni\x2d612a3c95\x2dfc2d\x2d7631\x2dec13\x2db39159a87858.mount: Deactivated successfully. Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.320 [INFO][4843] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.320 [INFO][4843] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" iface="eth0" netns="/var/run/netns/cni-cea89c81-4e67-2c32-0c4f-5ac94f68f3df" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.321 [INFO][4843] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" iface="eth0" netns="/var/run/netns/cni-cea89c81-4e67-2c32-0c4f-5ac94f68f3df" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.322 [INFO][4843] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" iface="eth0" netns="/var/run/netns/cni-cea89c81-4e67-2c32-0c4f-5ac94f68f3df" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.322 [INFO][4843] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.322 [INFO][4843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.493 [INFO][4896] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.494 [INFO][4896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.513 [INFO][4896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.553 [WARNING][4896] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.553 [INFO][4896] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.565 [INFO][4896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:09.595127 containerd[1602]: 2026-01-20 00:49:09.587 [INFO][4843] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:09.602590 containerd[1602]: time="2026-01-20T00:49:09.602494963Z" level=info msg="TearDown network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" successfully" Jan 20 00:49:09.602590 containerd[1602]: time="2026-01-20T00:49:09.602535979Z" level=info msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" returns successfully" Jan 20 00:49:09.608780 kubelet[2786]: E0120 00:49:09.606803 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:09.616720 containerd[1602]: time="2026-01-20T00:49:09.611059422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xqh9g,Uid:bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5,Namespace:kube-system,Attempt:1,}" Jan 20 00:49:09.645592 systemd[1]: run-netns-cni\x2dcea89c81\x2d4e67\x2d2c32\x2d0c4f\x2d5ac94f68f3df.mount: Deactivated successfully. 
Jan 20 00:49:09.692562 containerd[1602]: time="2026-01-20T00:49:09.691229249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h28hk,Uid:1f255c2e-3546-405d-a567-940c6cad406e,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b\"" Jan 20 00:49:09.698028 kubelet[2786]: E0120 00:49:09.697680 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:09.711663 containerd[1602]: time="2026-01-20T00:49:09.710725913Z" level=info msg="CreateContainer within sandbox \"f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:49:09.797190 containerd[1602]: time="2026-01-20T00:49:09.791165268Z" level=info msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" Jan 20 00:49:09.982171 containerd[1602]: time="2026-01-20T00:49:09.981821501Z" level=info msg="CreateContainer within sandbox \"f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2db0633dedd9c9955e80b1f1e0d2fe4aa61df3b6aff9d79b5ed625763bc16dd\"" Jan 20 00:49:09.990866 containerd[1602]: time="2026-01-20T00:49:09.990832709Z" level=info msg="StartContainer for \"c2db0633dedd9c9955e80b1f1e0d2fe4aa61df3b6aff9d79b5ed625763bc16dd\"" Jan 20 00:49:10.467473 containerd[1602]: time="2026-01-20T00:49:10.467413417Z" level=info msg="StartContainer for \"c2db0633dedd9c9955e80b1f1e0d2fe4aa61df3b6aff9d79b5ed625763bc16dd\" returns successfully" Jan 20 00:49:10.525940 systemd-networkd[1264]: cali39accc5aeb4: Link UP Jan 20 00:49:10.542563 systemd-networkd[1264]: cali39accc5aeb4: Gained carrier Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.069 [INFO][4948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.074 [INFO][4948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" iface="eth0" netns="/var/run/netns/cni-9410dd3a-f953-bcb4-9fb0-b1577d801627" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.075 [INFO][4948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" iface="eth0" netns="/var/run/netns/cni-9410dd3a-f953-bcb4-9fb0-b1577d801627" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.076 [INFO][4948] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" iface="eth0" netns="/var/run/netns/cni-9410dd3a-f953-bcb4-9fb0-b1577d801627" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.076 [INFO][4948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.077 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.308 [INFO][4987] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.309 [INFO][4987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.490 [INFO][4987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.513 [WARNING][4987] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.513 [INFO][4987] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.522 [INFO][4987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:10.558852 containerd[1602]: 2026-01-20 00:49:10.547 [INFO][4948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:10.586170 systemd[1]: run-netns-cni\x2d9410dd3a\x2df953\x2dbcb4\x2d9fb0\x2db1577d801627.mount: Deactivated successfully. 
Jan 20 00:49:10.588537 containerd[1602]: time="2026-01-20T00:49:10.587644780Z" level=info msg="TearDown network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" successfully" Jan 20 00:49:10.588537 containerd[1602]: time="2026-01-20T00:49:10.587689303Z" level=info msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" returns successfully" Jan 20 00:49:10.600876 containerd[1602]: time="2026-01-20T00:49:10.595744258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmzpv,Uid:9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a,Namespace:calico-system,Attempt:1,}" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:09.982 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0 calico-apiserver-7ddd4777cd- calico-apiserver 7e16703d-6774-4dbd-a448-684d9c6307e4 1124 0 2026-01-20 00:48:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ddd4777cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7ddd4777cd-f4nqr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39accc5aeb4 [] [] }} ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:09.986 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.287 [INFO][4973] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" HandleID="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.287 [INFO][4973] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" HandleID="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000344b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7ddd4777cd-f4nqr", "timestamp":"2026-01-20 00:49:10.287072012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.287 [INFO][4973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.287 [INFO][4973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.287 [INFO][4973] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.322 [INFO][4973] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.351 [INFO][4973] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.383 [INFO][4973] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.399 [INFO][4973] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.408 [INFO][4973] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.408 [INFO][4973] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.414 [INFO][4973] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.447 [INFO][4973] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.489 [INFO][4973] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.489 [INFO][4973] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" host="localhost" Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.489 [INFO][4973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:49:10.635430 containerd[1602]: 2026-01-20 00:49:10.489 [INFO][4973] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" HandleID="k8s-pod-network.b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.643401 containerd[1602]: 2026-01-20 00:49:10.504 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e16703d-6774-4dbd-a448-684d9c6307e4", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7ddd4777cd-f4nqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39accc5aeb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:10.643401 containerd[1602]: 2026-01-20 00:49:10.508 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.643401 containerd[1602]: 2026-01-20 00:49:10.508 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39accc5aeb4 ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.643401 containerd[1602]: 2026-01-20 00:49:10.538 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.643401 containerd[1602]: 2026-01-20 00:49:10.546 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e16703d-6774-4dbd-a448-684d9c6307e4", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a", Pod:"calico-apiserver-7ddd4777cd-f4nqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39accc5aeb4", MAC:"66:14:62:42:a2:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:10.643401 containerd[1602]: 2026-01-20 00:49:10.610 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-f4nqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:10.790268 containerd[1602]: time="2026-01-20T00:49:10.787377672Z" level=info msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" Jan 20 00:49:10.795151 containerd[1602]: time="2026-01-20T00:49:10.787786038Z" level=info msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" Jan 20 00:49:10.799663 containerd[1602]: time="2026-01-20T00:49:10.787905982Z" level=info msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" Jan 20 00:49:10.866505 containerd[1602]: time="2026-01-20T00:49:10.866369103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:10.866753 containerd[1602]: time="2026-01-20T00:49:10.866471685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:10.866753 containerd[1602]: time="2026-01-20T00:49:10.866509535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:10.867313 containerd[1602]: time="2026-01-20T00:49:10.866681857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:10.880411 systemd-networkd[1264]: calie90db2bf6a4: Link UP Jan 20 00:49:10.884303 systemd-networkd[1264]: calid78bf62c3b6: Gained IPv6LL Jan 20 00:49:10.894173 systemd-networkd[1264]: calie90db2bf6a4: Gained carrier Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.100 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0 coredns-668d6bf9bc- kube-system bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5 1125 0 2026-01-20 00:47:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xqh9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie90db2bf6a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.100 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.370 [INFO][4998] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" HandleID="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.371 [INFO][4998] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" HandleID="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xqh9g", "timestamp":"2026-01-20 00:49:10.370863384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.372 [INFO][4998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.525 [INFO][4998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.525 [INFO][4998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.613 [INFO][4998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.657 [INFO][4998] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.703 [INFO][4998] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.718 [INFO][4998] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.730 [INFO][4998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.731 [INFO][4998] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.760 [INFO][4998] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.804 [INFO][4998] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.851 [INFO][4998] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.854 [INFO][4998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" host="localhost" Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.854 [INFO][4998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:49:11.204614 containerd[1602]: 2026-01-20 00:49:10.854 [INFO][4998] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" HandleID="k8s-pod-network.81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.206145 kubelet[2786]: E0120 00:49:11.197656 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:11.209437 containerd[1602]: 2026-01-20 00:49:10.862 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xqh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie90db2bf6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:11.209437 containerd[1602]: 2026-01-20 00:49:10.863 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.209437 containerd[1602]: 2026-01-20 00:49:10.864 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie90db2bf6a4 ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.209437 containerd[1602]: 2026-01-20 00:49:10.938 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.209437 containerd[1602]: 2026-01-20 00:49:10.956 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c", Pod:"coredns-668d6bf9bc-xqh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie90db2bf6a4", MAC:"3a:8d:f9:d1:de:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:11.209437 containerd[1602]: 2026-01-20 00:49:11.059 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c" Namespace="kube-system" Pod="coredns-668d6bf9bc-xqh9g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:11.251826 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:49:11.281311 kubelet[2786]: I0120 00:49:11.280742 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h28hk" podStartSLOduration=79.280718798 podStartE2EDuration="1m19.280718798s" podCreationTimestamp="2026-01-20 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:49:11.280110985 +0000 UTC m=+81.887901685" watchObservedRunningTime="2026-01-20 00:49:11.280718798 +0000 UTC m=+81.888509498" Jan 20 00:49:11.671543 containerd[1602]: time="2026-01-20T00:49:11.645922936Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:11.671543 containerd[1602]: time="2026-01-20T00:49:11.647533116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:11.671543 containerd[1602]: time="2026-01-20T00:49:11.647562290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:11.671543 containerd[1602]: time="2026-01-20T00:49:11.648336302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.359 [INFO][5102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.361 [INFO][5102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" iface="eth0" netns="/var/run/netns/cni-7ad1e068-9fc6-637d-7a3b-8406cee15eb3" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.362 [INFO][5102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" iface="eth0" netns="/var/run/netns/cni-7ad1e068-9fc6-637d-7a3b-8406cee15eb3" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.363 [INFO][5102] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" iface="eth0" netns="/var/run/netns/cni-7ad1e068-9fc6-637d-7a3b-8406cee15eb3" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.364 [INFO][5102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.365 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.652 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.656 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.656 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.692 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.692 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.702 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:11.750551 containerd[1602]: 2026-01-20 00:49:11.741 [INFO][5102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:11.756091 containerd[1602]: time="2026-01-20T00:49:11.755782836Z" level=info msg="TearDown network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" successfully" Jan 20 00:49:11.756457 containerd[1602]: time="2026-01-20T00:49:11.756367535Z" level=info msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" returns successfully" Jan 20 00:49:11.758861 containerd[1602]: time="2026-01-20T00:49:11.758794700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-f4nqr,Uid:7e16703d-6774-4dbd-a448-684d9c6307e4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a\"" Jan 20 00:49:11.766109 containerd[1602]: time="2026-01-20T00:49:11.762159810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdf5b57-92x4l,Uid:c6f27543-10cf-4ae1-9e7a-a66dba01cb01,Namespace:calico-system,Attempt:1,}" Jan 20 00:49:11.764834 systemd[1]: run-netns-cni\x2d7ad1e068\x2d9fc6\x2d637d\x2d7a3b\x2d8406cee15eb3.mount: Deactivated successfully. Jan 20 00:49:11.782208 systemd-networkd[1264]: cali39accc5aeb4: Gained IPv6LL Jan 20 00:49:11.843460 containerd[1602]: time="2026-01-20T00:49:11.791705748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:49:11.969551 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:49:12.017064 systemd-networkd[1264]: cali3520f3dc012: Link UP Jan 20 00:49:12.023526 systemd-networkd[1264]: cali3520f3dc012: Gained carrier Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.554 [INFO][5104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.554 [INFO][5104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" iface="eth0" netns="/var/run/netns/cni-ff32db33-6659-310a-8f18-34791c452ef6" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.557 [INFO][5104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" iface="eth0" netns="/var/run/netns/cni-ff32db33-6659-310a-8f18-34791c452ef6" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.566 [INFO][5104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" iface="eth0" netns="/var/run/netns/cni-ff32db33-6659-310a-8f18-34791c452ef6" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.585 [INFO][5104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.585 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.798 [INFO][5195] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.864 [INFO][5195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.958 [INFO][5195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.985 [WARNING][5195] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.985 [INFO][5195] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:11.997 [INFO][5195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:12.048764 containerd[1602]: 2026-01-20 00:49:12.025 [INFO][5104] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:49:12.057856 containerd[1602]: time="2026-01-20T00:49:12.054147671Z" level=info msg="TearDown network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" successfully" Jan 20 00:49:12.057856 containerd[1602]: time="2026-01-20T00:49:12.054234192Z" level=info msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" returns successfully" Jan 20 00:49:12.077474 containerd[1602]: time="2026-01-20T00:49:12.077416873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-jcj86,Uid:303ab104-f18e-4de9-832d-feef41e44244,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.042 [INFO][5049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vmzpv-eth0 goldmane-666569f655- calico-system 9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a 1133 0 2026-01-20 00:48:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vmzpv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3520f3dc012 [] [] }} ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.043 [INFO][5049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.699 [INFO][5143] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" HandleID="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.700 [INFO][5143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" HandleID="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000244920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vmzpv", "timestamp":"2026-01-20 00:49:11.699378025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.700 [INFO][5143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.702 [INFO][5143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.702 [INFO][5143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.732 [INFO][5143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.764 [INFO][5143] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.806 [INFO][5143] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.849 [INFO][5143] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.860 [INFO][5143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.860 [INFO][5143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.889 [INFO][5143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2 Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.915 [INFO][5143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.957 [INFO][5143] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.957 [INFO][5143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" host="localhost" Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.957 [INFO][5143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
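
The trace just completed is Calico's assignment path end to end: acquire the host-wide IPAM lock, look up the block affine to the host, load 192.168.88.128/26 (which spans 192.168.88.128 through .191, 64 addresses), claim a free address (.134 here), write the block back to claim the IP, and release the lock. A minimal sketch of that shape, with Block, hostLock, and claimFirstFree as hypothetical stand-ins for libcalico-go's ipam package, and no reserved-address handling:

    // ipam_sketch.go — simplified lock → load block → claim → write flow.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type Block struct {
        CIDR      netip.Prefix          // e.g. 192.168.88.128/26
        Allocated map[netip.Addr]string // addr -> handle ID
    }

    var hostLock sync.Mutex // stands in for the host-wide IPAM lock

    func claimFirstFree(b *Block, handle string) (netip.Addr, error) {
        hostLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."
        for a := b.CIDR.Addr(); b.CIDR.Contains(a); a = a.Next() {
            if _, taken := b.Allocated[a]; !taken {
                b.Allocated[a] = handle // "Writing block in order to claim IPs"
                return a, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", b.CIDR)
    }

    func main() {
        b := &Block{
            CIDR:      netip.MustParsePrefix("192.168.88.128/26"),
            Allocated: map[netip.Addr]string{},
        }
        addr, _ := claimFirstFree(b, "k8s-pod-network.f4fc601b…")
        fmt.Println("claimed", addr)
    }

Serializing every assignment behind one per-host lock is what makes the acquire/release pairs in these entries line up strictly one pod at a time, even while several sandboxes are being set up concurrently.
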
Jan 20 00:49:12.102095 containerd[1602]: 2026-01-20 00:49:11.957 [INFO][5143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" HandleID="k8s-pod-network.f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.106021 containerd[1602]: 2026-01-20 00:49:11.974 [INFO][5049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vmzpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vmzpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3520f3dc012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:12.106021 containerd[1602]: 2026-01-20 00:49:11.974 [INFO][5049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.106021 containerd[1602]: 2026-01-20 00:49:11.974 [INFO][5049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3520f3dc012 ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.106021 containerd[1602]: 2026-01-20 00:49:12.012 [INFO][5049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.106021 containerd[1602]: 2026-01-20 00:49:12.018 [INFO][5049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vmzpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2", Pod:"goldmane-666569f655-vmzpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3520f3dc012", MAC:"12:59:f2:af:f8:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:12.106021 containerd[1602]: 2026-01-20 00:49:12.072 [INFO][5049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2" Namespace="calico-system" Pod="goldmane-666569f655-vmzpv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.413 [INFO][5099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.414 [INFO][5099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" iface="eth0" netns="/var/run/netns/cni-a120ce76-22a6-5ea9-9d0d-70de0090b40d" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.429 [INFO][5099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" iface="eth0" netns="/var/run/netns/cni-a120ce76-22a6-5ea9-9d0d-70de0090b40d" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.431 [INFO][5099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" iface="eth0" netns="/var/run/netns/cni-a120ce76-22a6-5ea9-9d0d-70de0090b40d" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.431 [INFO][5099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.431 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.866 [INFO][5180] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:11.867 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:12.001 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:12.052 [WARNING][5180] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:12.052 [INFO][5180] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:12.094 [INFO][5180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:12.115312 containerd[1602]: 2026-01-20 00:49:12.102 [INFO][5099] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:12.126651 containerd[1602]: time="2026-01-20T00:49:12.126184248Z" level=info msg="TearDown network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" successfully" Jan 20 00:49:12.126651 containerd[1602]: time="2026-01-20T00:49:12.126226958Z" level=info msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" returns successfully" Jan 20 00:49:12.132418 containerd[1602]: time="2026-01-20T00:49:12.128834525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7748477466-xqhsk,Uid:7442950d-347c-4ccb-839f-bbcef74b512f,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:49:12.141465 containerd[1602]: time="2026-01-20T00:49:12.140725939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:12.144906 containerd[1602]: time="2026-01-20T00:49:12.144857430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xqh9g,Uid:bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c\"" Jan 20 00:49:12.206871 kubelet[2786]: E0120 00:49:12.206717 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:12.251095 containerd[1602]: time="2026-01-20T00:49:12.248800164Z" level=info msg="CreateContainer within sandbox \"81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:49:12.268254 containerd[1602]: time="2026-01-20T00:49:12.268202915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:49:12.275378 containerd[1602]: time="2026-01-20T00:49:12.275321188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:49:12.275912 kubelet[2786]: E0120 00:49:12.275865 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:12.276147 kubelet[2786]: E0120 00:49:12.276108 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:12.279672 kubelet[2786]: E0120 00:49:12.279603 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-f4nqr_calico-apiserver(7e16703d-6774-4dbd-a448-684d9c6307e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:12.281909 kubelet[2786]: E0120 00:49:12.281803 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:49:12.289184 systemd[1]: run-netns-cni\x2da120ce76\x2d22a6\x2d5ea9\x2d9d0d\x2d70de0090b40d.mount: Deactivated successfully. Jan 20 00:49:12.291793 systemd[1]: run-netns-cni\x2dff32db33\x2d6659\x2d310a\x2d8f18\x2d34791c452ef6.mount: Deactivated successfully. Jan 20 00:49:12.294559 kubelet[2786]: E0120 00:49:12.293505 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:12.380726 containerd[1602]: time="2026-01-20T00:49:12.380623102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:12.380890 containerd[1602]: time="2026-01-20T00:49:12.380697420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:12.380890 containerd[1602]: time="2026-01-20T00:49:12.380717788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:12.380890 containerd[1602]: time="2026-01-20T00:49:12.380843061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:12.473696 containerd[1602]: time="2026-01-20T00:49:12.473610967Z" level=info msg="CreateContainer within sandbox \"81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da02143d657e7ea9ff69023c17e6af5af3683a88bff7129d333111b9fc65d04b\"" Jan 20 00:49:12.475198 containerd[1602]: time="2026-01-20T00:49:12.475037777Z" level=info msg="StartContainer for \"da02143d657e7ea9ff69023c17e6af5af3683a88bff7129d333111b9fc65d04b\"" Jan 20 00:49:12.522415 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:49:12.803742 systemd-networkd[1264]: calie90db2bf6a4: Gained IPv6LL Jan 20 00:49:12.886196 containerd[1602]: time="2026-01-20T00:49:12.884626151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmzpv,Uid:9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a,Namespace:calico-system,Attempt:1,} returns sandbox id \"f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2\"" Jan 20 00:49:12.899864 systemd-networkd[1264]: cali408fdbbab03: Link UP Jan 20 00:49:12.905930 systemd-networkd[1264]: cali408fdbbab03: Gained carrier Jan 20 00:49:12.959474 containerd[1602]: time="2026-01-20T00:49:12.949185584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:49:12.968657 containerd[1602]: time="2026-01-20T00:49:12.962596399Z" level=info msg="StartContainer for \"da02143d657e7ea9ff69023c17e6af5af3683a88bff7129d333111b9fc65d04b\" returns successfully" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.283 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0 calico-kube-controllers-55cdf5b57- calico-system c6f27543-10cf-4ae1-9e7a-a66dba01cb01 1147 0 2026-01-20 00:48:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55cdf5b57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55cdf5b57-92x4l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali408fdbbab03 [] [] }} ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.284 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" 
Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.567 [INFO][5291] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" HandleID="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.568 [INFO][5291] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" HandleID="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042b0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55cdf5b57-92x4l", "timestamp":"2026-01-20 00:49:12.567583209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.569 [INFO][5291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.569 [INFO][5291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.569 [INFO][5291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.622 [INFO][5291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.676 [INFO][5291] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.723 [INFO][5291] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.730 [INFO][5291] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.743 [INFO][5291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.743 [INFO][5291] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.751 [INFO][5291] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17 Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.793 [INFO][5291] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.845 [INFO][5291] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.845 [INFO][5291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" host="localhost" Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.846 [INFO][5291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:13.083123 containerd[1602]: 2026-01-20 00:49:12.847 [INFO][5291] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" HandleID="k8s-pod-network.ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.088842 containerd[1602]: 2026-01-20 00:49:12.876 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0", GenerateName:"calico-kube-controllers-55cdf5b57-", Namespace:"calico-system", SelfLink:"", UID:"c6f27543-10cf-4ae1-9e7a-a66dba01cb01", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdf5b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55cdf5b57-92x4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408fdbbab03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:13.088842 containerd[1602]: 2026-01-20 00:49:12.884 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.088842 containerd[1602]: 2026-01-20 00:49:12.884 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali408fdbbab03 ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.088842 containerd[1602]: 2026-01-20 00:49:12.895 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.088842 containerd[1602]: 2026-01-20 00:49:12.951 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0", GenerateName:"calico-kube-controllers-55cdf5b57-", Namespace:"calico-system", SelfLink:"", UID:"c6f27543-10cf-4ae1-9e7a-a66dba01cb01", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdf5b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17", Pod:"calico-kube-controllers-55cdf5b57-92x4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408fdbbab03", MAC:"ba:63:bb:6e:2a:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:13.088842 containerd[1602]: 2026-01-20 00:49:13.051 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17" Namespace="calico-system" Pod="calico-kube-controllers-55cdf5b57-92x4l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:13.130792 containerd[1602]: time="2026-01-20T00:49:13.130741593Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:13.182075 containerd[1602]: time="2026-01-20T00:49:13.171026428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:49:13.182075 containerd[1602]: time="2026-01-20T00:49:13.171133917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, 
bytes read=77" Jan 20 00:49:13.192657 kubelet[2786]: E0120 00:49:13.174761 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:49:13.192657 kubelet[2786]: E0120 00:49:13.174834 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:49:13.192657 kubelet[2786]: E0120 00:49:13.175086 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsnnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-vmzpv_calico-system(9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:13.192657 kubelet[2786]: E0120 00:49:13.180432 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:49:13.278429 containerd[1602]: time="2026-01-20T00:49:13.276655148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:49:13.278429 containerd[1602]: time="2026-01-20T00:49:13.276734868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:49:13.278429 containerd[1602]: time="2026-01-20T00:49:13.276751629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:13.284475 containerd[1602]: time="2026-01-20T00:49:13.279566364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:49:13.321914 kubelet[2786]: E0120 00:49:13.320518 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:13.370769 kubelet[2786]: E0120 00:49:13.361836 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:13.371855 systemd-networkd[1264]: calib91424c359c: Link UP Jan 20 00:49:13.374160 systemd-networkd[1264]: calib91424c359c: Gained carrier Jan 20 00:49:13.379672 kubelet[2786]: E0120 00:49:13.378586 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:49:13.385896 systemd-networkd[1264]: cali3520f3dc012: Gained IPv6LL Jan 20 00:49:13.386241 kubelet[2786]: E0120 00:49:13.385892 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.655 [INFO][5314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0 calico-apiserver-7ddd4777cd- calico-apiserver 303ab104-f18e-4de9-832d-feef41e44244 1155 0 2026-01-20 00:48:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ddd4777cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7ddd4777cd-jcj86 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib91424c359c [] [] }} ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.656 [INFO][5314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.861 [INFO][5387] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" HandleID="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.861 [INFO][5387] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" HandleID="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004aeb00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7ddd4777cd-jcj86", "timestamp":"2026-01-20 00:49:12.861122739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.861 [INFO][5387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.861 [INFO][5387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
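
Interleaved with these CNI traces, kubelet keeps logging the dns.go:153 error "Nameserver limits exceeded". That comes from the classic glibc resolver limit of three nameservers, which kubelet enforces when building a pod's resolv.conf: extra entries are dropped and the surviving line (1.1.1.1 1.0.0.1 8.8.8.8 here) is logged. A sketch of the truncation, assuming a hypothetical standalone parser rather than kubelet's actual dns package:

    // resolv_sketch.go — reproduces the "applied nameserver line" logic.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the glibc resolver limit kubelet enforces.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // kubelet logs the equivalent of "Nameserver limits exceeded,
            // some nameservers have been omitted" and keeps the first three.
            fmt.Println("omitting:", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
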
Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.861 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:12.950 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.057 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.121 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.152 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.195 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.195 [INFO][5387] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.214 [INFO][5387] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.259 [INFO][5387] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.312 [INFO][5387] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.314 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" host="localhost" Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.314 [INFO][5387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
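
The other failure threading through this window is the image pull path: containerd tries to resolve ghcr.io/flatcar/calico/…:v3.30.4, ghcr.io answers 404 ("trying next host - response was http.StatusNotFound"), containerd reports NotFound, kubelet surfaces that as ErrImagePull, and each later sync attempt becomes ImagePullBackOff. The resolve step can be reproduced with the containerd Go client; the socket path and the k8s.io namespace below are the usual defaults, assumed here rather than taken from this log:

    // pull_check.go — sketch: ask containerd to pull the same reference
    // and observe the NotFound error at resolve time.
    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Images managed by kubelet live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
        if _, err := client.Pull(ctx, ref); err != nil {
            // For a missing tag this yields the same "failed to resolve
            // reference … not found" wording containerd logged above.
            fmt.Println("pull failed:", err)
        }
    }

Because the tag fails at reference resolution, the failure is immediate and cheap, which is why kubelet retries settle into backoff rather than a long-running download.
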
Jan 20 00:49:13.542809 containerd[1602]: 2026-01-20 00:49:13.314 [INFO][5387] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" HandleID="k8s-pod-network.47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0"
Jan 20 00:49:13.553093 containerd[1602]: 2026-01-20 00:49:13.343 [INFO][5314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"303ab104-f18e-4de9-832d-feef41e44244", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7ddd4777cd-jcj86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib91424c359c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 00:49:13.553093 containerd[1602]: 2026-01-20 00:49:13.345 [INFO][5314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0"
Jan 20 00:49:13.553093 containerd[1602]: 2026-01-20 00:49:13.345 [INFO][5314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib91424c359c ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0"
Jan 20 00:49:13.553093 containerd[1602]: 2026-01-20 00:49:13.434 [INFO][5314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0"
Jan 20 00:49:13.553093 containerd[1602]: 2026-01-20 00:49:13.443 [INFO][5314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"303ab104-f18e-4de9-832d-feef41e44244", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d", Pod:"calico-apiserver-7ddd4777cd-jcj86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib91424c359c", MAC:"22:e2:b8:7a:ff:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 00:49:13.553093 containerd[1602]: 2026-01-20 00:49:13.494 [INFO][5314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddd4777cd-jcj86" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0"
Jan 20 00:49:13.561120 kubelet[2786]: I0120 00:49:13.536527 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xqh9g" podStartSLOduration=81.534035623 podStartE2EDuration="1m21.534035623s" podCreationTimestamp="2026-01-20 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:49:13.41921879 +0000 UTC m=+84.027009509" watchObservedRunningTime="2026-01-20 00:49:13.534035623 +0000 UTC m=+84.141826333"
Jan 20 00:49:13.607084 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 00:49:13.746108 containerd[1602]: time="2026-01-20T00:49:13.744266692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:49:13.746108 containerd[1602]: time="2026-01-20T00:49:13.745465767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:49:13.746108 containerd[1602]: time="2026-01-20T00:49:13.745491405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:13.746108 containerd[1602]: time="2026-01-20T00:49:13.745647866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:13.816090 containerd[1602]: time="2026-01-20T00:49:13.815776542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdf5b57-92x4l,Uid:c6f27543-10cf-4ae1-9e7a-a66dba01cb01,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17\""
Jan 20 00:49:13.828930 containerd[1602]: time="2026-01-20T00:49:13.826201777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 20 00:49:13.839782 systemd-networkd[1264]: calia4987466b81: Link UP
Jan 20 00:49:13.846075 systemd-networkd[1264]: calia4987466b81: Gained carrier
Jan 20 00:49:13.952378 containerd[1602]: time="2026-01-20T00:49:13.952180536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:13.958350 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 00:49:13.962729 containerd[1602]: time="2026-01-20T00:49:13.962389570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 20 00:49:13.965739 containerd[1602]: time="2026-01-20T00:49:13.962879692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 20 00:49:13.966893 kubelet[2786]: E0120 00:49:13.966170 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 00:49:13.966893 kubelet[2786]: E0120 00:49:13.966236 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 00:49:13.966893 kubelet[2786]: E0120 00:49:13.966440 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55cdf5b57-92x4l_calico-system(c6f27543-10cf-4ae1-9e7a-a66dba01cb01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:13.968584 kubelet[2786]: E0120 00:49:13.968471 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:12.622 [INFO][5305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0 calico-apiserver-7748477466- calico-apiserver 7442950d-347c-4ccb-839f-bbcef74b512f 1151 0 2026-01-20 00:48:12 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7748477466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7748477466-xqhsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia4987466b81 [] [] }} ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:12.622 [INFO][5305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:12.879 [INFO][5375] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" HandleID="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:12.880 [INFO][5375] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" HandleID="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7748477466-xqhsk", "timestamp":"2026-01-20 00:49:12.87909155 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:12.880 [INFO][5375] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.315 [INFO][5375] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.324 [INFO][5375] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.407 [INFO][5375] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.563 [INFO][5375] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.598 [INFO][5375] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.625 [INFO][5375] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.642 [INFO][5375] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.642 [INFO][5375] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.660 [INFO][5375] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.695 [INFO][5375] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.758 [INFO][5375] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.759 [INFO][5375] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" host="localhost"
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.759 [INFO][5375] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 20 00:49:13.970746 containerd[1602]: 2026-01-20 00:49:13.759 [INFO][5375] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" HandleID="k8s-pod-network.a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:13.979173 containerd[1602]: 2026-01-20 00:49:13.788 [INFO][5305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0", GenerateName:"calico-apiserver-7748477466-", Namespace:"calico-apiserver", SelfLink:"", UID:"7442950d-347c-4ccb-839f-bbcef74b512f", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7748477466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7748477466-xqhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4987466b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 00:49:13.979173 containerd[1602]: 2026-01-20 00:49:13.788 [INFO][5305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:13.979173 containerd[1602]: 2026-01-20 00:49:13.788 [INFO][5305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4987466b81 ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:13.979173 containerd[1602]: 2026-01-20 00:49:13.847 [INFO][5305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:13.979173 containerd[1602]: 2026-01-20 00:49:13.850 [INFO][5305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0", GenerateName:"calico-apiserver-7748477466-", Namespace:"calico-apiserver", SelfLink:"", UID:"7442950d-347c-4ccb-839f-bbcef74b512f", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7748477466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db", Pod:"calico-apiserver-7748477466-xqhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4987466b81", MAC:"be:e6:f8:78:eb:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 00:49:13.979173 containerd[1602]: 2026-01-20 00:49:13.948 [INFO][5305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db" Namespace="calico-apiserver" Pod="calico-apiserver-7748477466-xqhsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0"
Jan 20 00:49:14.134221 containerd[1602]: time="2026-01-20T00:49:14.128385785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:49:14.134221 containerd[1602]: time="2026-01-20T00:49:14.128924509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:49:14.134221 containerd[1602]: time="2026-01-20T00:49:14.129120193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:14.134221 containerd[1602]: time="2026-01-20T00:49:14.130773503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:14.152861 containerd[1602]: time="2026-01-20T00:49:14.151496536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddd4777cd-jcj86,Uid:303ab104-f18e-4de9-832d-feef41e44244,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d\""
Jan 20 00:49:14.163095 containerd[1602]: time="2026-01-20T00:49:14.163034136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 00:49:14.260026 containerd[1602]: time="2026-01-20T00:49:14.259619781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:14.265324 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 00:49:14.278463 systemd-networkd[1264]: cali408fdbbab03: Gained IPv6LL
Jan 20 00:49:14.279406 containerd[1602]: time="2026-01-20T00:49:14.276167945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 00:49:14.280493 kubelet[2786]: E0120 00:49:14.279732 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 00:49:14.280493 kubelet[2786]: E0120 00:49:14.279801 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 00:49:14.280493 kubelet[2786]: E0120 00:49:14.280033 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh9pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-jcj86_calico-apiserver(303ab104-f18e-4de9-832d-feef41e44244): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:14.283909 kubelet[2786]: E0120 00:49:14.283780 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244"
Jan 20 00:49:14.291935 containerd[1602]: time="2026-01-20T00:49:14.291839338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 00:49:14.397768 kubelet[2786]: E0120 00:49:14.396406 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:14.400844 kubelet[2786]: E0120 00:49:14.400246 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01"
Jan 20 00:49:14.400844 kubelet[2786]: E0120 00:49:14.400415 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244"
Jan 20 00:49:14.402779 kubelet[2786]: E0120 00:49:14.402425 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a"
Jan 20 00:49:14.420094 containerd[1602]: time="2026-01-20T00:49:14.413041900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7748477466-xqhsk,Uid:7442950d-347c-4ccb-839f-bbcef74b512f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db\""
Jan 20 00:49:14.432782 containerd[1602]: time="2026-01-20T00:49:14.432498706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 00:49:14.569886 containerd[1602]: time="2026-01-20T00:49:14.565240307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:14.583467 containerd[1602]: time="2026-01-20T00:49:14.582087205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 00:49:14.583467 containerd[1602]: time="2026-01-20T00:49:14.582211265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 00:49:14.584688 kubelet[2786]: E0120 00:49:14.584638 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 00:49:14.584872 kubelet[2786]: E0120 00:49:14.584837 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 00:49:14.586010 kubelet[2786]: E0120 00:49:14.585827 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpsm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7748477466-xqhsk_calico-apiserver(7442950d-347c-4ccb-839f-bbcef74b512f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:14.589700 kubelet[2786]: E0120 00:49:14.589573 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f"
Jan 20 00:49:14.721737 systemd-networkd[1264]: calib91424c359c: Gained IPv6LL
Jan 20 00:49:15.171323 systemd-networkd[1264]: calia4987466b81: Gained IPv6LL
Jan 20 00:49:15.439255 kubelet[2786]: E0120 00:49:15.437734 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f"
Jan 20 00:49:15.439255 kubelet[2786]: E0120 00:49:15.438267 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01"
Jan 20 00:49:15.444906 kubelet[2786]: E0120 00:49:15.440403 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244"
Jan 20 00:49:15.444906 kubelet[2786]: E0120 00:49:15.440444 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:16.453352 kubelet[2786]: E0120 00:49:16.451072 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f"
Jan 20 00:49:17.793441 containerd[1602]: time="2026-01-20T00:49:17.791081597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 20 00:49:17.909162 containerd[1602]: time="2026-01-20T00:49:17.908176780Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:17.919148 containerd[1602]: time="2026-01-20T00:49:17.917720494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 20 00:49:17.919148 containerd[1602]: time="2026-01-20T00:49:17.917877886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 20 00:49:17.919399 kubelet[2786]: E0120 00:49:17.918127 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 00:49:17.919399 kubelet[2786]: E0120 00:49:17.918189 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 00:49:17.919399 kubelet[2786]: E0120 00:49:17.918363 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd15b3b8928842729e5a367f173cdad6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:17.925377 containerd[1602]: time="2026-01-20T00:49:17.925336478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 20 00:49:18.022743 containerd[1602]: time="2026-01-20T00:49:18.022551360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:18.024683 containerd[1602]: time="2026-01-20T00:49:18.024531347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 20 00:49:18.024793 containerd[1602]: time="2026-01-20T00:49:18.024587752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 20 00:49:18.027378 kubelet[2786]: E0120 00:49:18.025142 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 00:49:18.027378 kubelet[2786]: E0120 00:49:18.025218 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 00:49:18.029021 kubelet[2786]: E0120 00:49:18.028524 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:18.030039 kubelet[2786]: E0120 00:49:18.029841 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d"
Jan 20 00:49:23.809051 containerd[1602]: time="2026-01-20T00:49:23.801014866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 20 00:49:23.915032 containerd[1602]: time="2026-01-20T00:49:23.914218431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:23.922394 containerd[1602]: time="2026-01-20T00:49:23.922181351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 20 00:49:23.922394 containerd[1602]: time="2026-01-20T00:49:23.922311648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 20 00:49:23.927330 kubelet[2786]: E0120 00:49:23.926677 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 00:49:23.927330 kubelet[2786]: E0120 00:49:23.926744 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 00:49:23.927330 kubelet[2786]: E0120 00:49:23.926880 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:23.937925 containerd[1602]: time="2026-01-20T00:49:23.937832179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 20 00:49:24.056640 containerd[1602]: time="2026-01-20T00:49:24.056156833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:24.068378 containerd[1602]: time="2026-01-20T00:49:24.068093648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 20 00:49:24.068378 containerd[1602]: time="2026-01-20T00:49:24.068241204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 20 00:49:24.068593 kubelet[2786]: E0120 00:49:24.068495 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 00:49:24.068593 kubelet[2786]: E0120 00:49:24.068570 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 00:49:24.070479 kubelet[2786]: E0120 00:49:24.068766 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:24.074705 kubelet[2786]: E0120 00:49:24.071262 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382"
Jan 20 00:49:24.778156 kubelet[2786]: E0120 00:49:24.777643 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:25.790373 containerd[1602]: time="2026-01-20T00:49:25.789939627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 20 00:49:25.906870 containerd[1602]: time="2026-01-20T00:49:25.904900685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:25.921077 containerd[1602]: time="2026-01-20T00:49:25.919811965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 20 00:49:25.921077 containerd[1602]: time="2026-01-20T00:49:25.919920448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 20 00:49:25.921276 kubelet[2786]: E0120 00:49:25.920207 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 00:49:25.921276 kubelet[2786]: E0120 00:49:25.920265 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 00:49:25.921276 kubelet[2786]: E0120 00:49:25.920486 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsnnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmzpv_calico-system(9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:25.926095 kubelet[2786]: E0120 00:49:25.924710 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a"
Jan 20 00:49:26.780767 containerd[1602]: time="2026-01-20T00:49:26.780382193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 20 00:49:26.877549 containerd[1602]: time="2026-01-20T00:49:26.876865350Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:26.885376 containerd[1602]: time="2026-01-20T00:49:26.880910297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 20 00:49:26.885376 containerd[1602]: time="2026-01-20T00:49:26.881096323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 20 00:49:26.885376 containerd[1602]: time="2026-01-20T00:49:26.884853538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 00:49:26.885650 kubelet[2786]: E0120 00:49:26.881356 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 00:49:26.885650 kubelet[2786]: E0120 00:49:26.881437 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 00:49:26.885650 kubelet[2786]: E0120 00:49:26.882280 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55cdf5b57-92x4l_calico-system(c6f27543-10cf-4ae1-9e7a-a66dba01cb01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 20 00:49:26.886240 kubelet[2786]: E0120 00:49:26.885686 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01"
Jan 20 00:49:26.988888 containerd[1602]: time="2026-01-20T00:49:26.988727339Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 00:49:26.993269 containerd[1602]: time="2026-01-20T00:49:26.992905568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 00:49:26.993269 containerd[1602]: time="2026-01-20T00:49:26.992920104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 00:49:26.996116 kubelet[2786]: E0120 00:49:26.995731 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 00:49:26.996116 kubelet[2786]: E0120 00:49:26.995806 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\":
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:26.996834 kubelet[2786]: E0120 00:49:26.996231 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh9pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-jcj86_calico-apiserver(303ab104-f18e-4de9-832d-feef41e44244): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:27.000418 kubelet[2786]: E0120 00:49:26.999448 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:49:27.786055 containerd[1602]: time="2026-01-20T00:49:27.785867795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:49:27.869895 containerd[1602]: time="2026-01-20T00:49:27.869510583Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:27.876729 containerd[1602]: 
time="2026-01-20T00:49:27.872363853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:49:27.876729 containerd[1602]: time="2026-01-20T00:49:27.874460088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:49:27.877546 kubelet[2786]: E0120 00:49:27.874824 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:27.877546 kubelet[2786]: E0120 00:49:27.875592 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:27.877546 kubelet[2786]: E0120 00:49:27.875780 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7ddd4777cd-f4nqr_calico-apiserver(7e16703d-6774-4dbd-a448-684d9c6307e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:27.881713 kubelet[2786]: E0120 00:49:27.879516 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:49:28.784670 kubelet[2786]: E0120 00:49:28.784367 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:49:29.799238 containerd[1602]: time="2026-01-20T00:49:29.797616774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:49:29.896541 containerd[1602]: time="2026-01-20T00:49:29.895719102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:29.915686 containerd[1602]: time="2026-01-20T00:49:29.911497670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:49:29.915686 containerd[1602]: time="2026-01-20T00:49:29.911661376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:49:29.919415 kubelet[2786]: E0120 00:49:29.918235 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:29.919415 kubelet[2786]: E0120 00:49:29.918343 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:29.919415 kubelet[2786]: E0120 00:49:29.918499 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpsm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7748477466-xqhsk_calico-apiserver(7442950d-347c-4ccb-839f-bbcef74b512f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:29.920195 kubelet[2786]: E0120 00:49:29.919598 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:49:29.949572 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:46470.service - OpenSSH per-connection server daemon (10.0.0.1:46470). 
Jan 20 00:49:30.186125 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 46470 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:30.188047 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:30.212435 systemd-logind[1586]: New session 10 of user core. Jan 20 00:49:30.232205 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:49:30.680625 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:30.689895 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:46470.service: Deactivated successfully. Jan 20 00:49:30.703378 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:49:30.717574 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:49:30.723380 systemd-logind[1586]: Removed session 10. Jan 20 00:49:32.466405 kubelet[2786]: E0120 00:49:32.463836 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:35.700491 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:60628.service - OpenSSH per-connection server daemon (10.0.0.1:60628). Jan 20 00:49:35.827240 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 60628 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:35.837871 sshd[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:35.861486 systemd-logind[1586]: New session 11 of user core. Jan 20 00:49:35.891161 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:49:36.332834 sshd[5649]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:36.345427 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:60628.service: Deactivated successfully. Jan 20 00:49:36.354071 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:49:36.356186 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:49:36.358018 systemd-logind[1586]: Removed session 11. 
Jan 20 00:49:39.805428 kubelet[2786]: E0120 00:49:39.802291 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:49:39.805428 kubelet[2786]: E0120 00:49:39.805272 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:49:39.805428 kubelet[2786]: E0120 00:49:39.805398 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:49:39.812128 kubelet[2786]: E0120 00:49:39.808425 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:49:39.812128 kubelet[2786]: E0120 00:49:39.810048 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:49:40.786582 containerd[1602]: time="2026-01-20T00:49:40.786199720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:49:40.886754 containerd[1602]: time="2026-01-20T00:49:40.886546591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:40.895820 containerd[1602]: time="2026-01-20T00:49:40.895694113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:49:40.897289 containerd[1602]: time="2026-01-20T00:49:40.895922878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:49:40.897410 kubelet[2786]: E0120 00:49:40.896092 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:49:40.897410 kubelet[2786]: E0120 00:49:40.896168 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:49:40.897410 kubelet[2786]: E0120 00:49:40.896377 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd15b3b8928842729e5a367f173cdad6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:40.898697 containerd[1602]: time="2026-01-20T00:49:40.898556227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:49:41.003621 containerd[1602]: time="2026-01-20T00:49:41.001910966Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:41.009530 containerd[1602]: time="2026-01-20T00:49:41.007904023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:49:41.009530 containerd[1602]: time="2026-01-20T00:49:41.008021447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:49:41.009756 kubelet[2786]: E0120 00:49:41.008408 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:49:41.009756 kubelet[2786]: E0120 00:49:41.008480 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:49:41.009756 kubelet[2786]: E0120 00:49:41.008634 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:41.011556 kubelet[2786]: E0120 00:49:41.011443 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:49:41.365637 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:60644.service - OpenSSH per-connection server daemon (10.0.0.1:60644). Jan 20 00:49:41.471238 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 60644 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:41.475536 sshd[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:41.491801 systemd-logind[1586]: New session 12 of user core. 
Jan 20 00:49:41.499526 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:49:41.831742 sshd[5666]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:41.849615 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:60644.service: Deactivated successfully. Jan 20 00:49:41.866778 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:49:41.868497 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:49:41.876571 systemd-logind[1586]: Removed session 12. Jan 20 00:49:44.784143 kubelet[2786]: E0120 00:49:44.784045 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:49:46.865936 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:52414.service - OpenSSH per-connection server daemon (10.0.0.1:52414). Jan 20 00:49:46.988471 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 52414 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:46.991468 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:47.012196 systemd-logind[1586]: New session 13 of user core. Jan 20 00:49:47.028949 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:49:47.790598 sshd[5689]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:47.804474 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:52414.service: Deactivated successfully. Jan 20 00:49:47.815852 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:49:47.831154 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:49:47.840642 systemd-logind[1586]: Removed session 13. Jan 20 00:49:49.843793 containerd[1602]: time="2026-01-20T00:49:49.843707854Z" level=info msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.033 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vmzpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2", Pod:"goldmane-666569f655-vmzpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3520f3dc012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.034 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.034 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" iface="eth0" netns="" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.034 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.034 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.258 [INFO][5724] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.260 [INFO][5724] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.260 [INFO][5724] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.617 [WARNING][5724] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.617 [INFO][5724] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.629 [INFO][5724] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:50.662913 containerd[1602]: 2026-01-20 00:49:50.649 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:50.667807 containerd[1602]: time="2026-01-20T00:49:50.663074456Z" level=info msg="TearDown network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" successfully" Jan 20 00:49:50.667807 containerd[1602]: time="2026-01-20T00:49:50.663114330Z" level=info msg="StopPodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" returns successfully" Jan 20 00:49:50.669133 containerd[1602]: time="2026-01-20T00:49:50.668655169Z" level=info msg="RemovePodSandbox for \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" Jan 20 00:49:50.672405 containerd[1602]: time="2026-01-20T00:49:50.672365175Z" level=info msg="Forcibly stopping sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\"" Jan 20 00:49:50.791554 containerd[1602]: time="2026-01-20T00:49:50.791464911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:49:50.890775 containerd[1602]: time="2026-01-20T00:49:50.890126266Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:50.898744 containerd[1602]: time="2026-01-20T00:49:50.897874598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:49:50.898744 containerd[1602]: time="2026-01-20T00:49:50.898081613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:49:50.900244 kubelet[2786]: E0120 00:49:50.899778 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:49:50.900244 kubelet[2786]: E0120 00:49:50.899856 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" 
Jan 20 00:49:50.900244 kubelet[2786]: E0120 00:49:50.900111 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55cdf5b57-92x4l_calico-system(c6f27543-10cf-4ae1-9e7a-a66dba01cb01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:50.906800 kubelet[2786]: E0120 00:49:50.906517 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" 
podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:50.909 [WARNING][5742] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vmzpv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a", ResourceVersion:"1397", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4fc601b9ad087b159dc2dbe8e397a39e7cbe89a8d4122cf4710af94dbd144b2", Pod:"goldmane-666569f655-vmzpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3520f3dc012", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:50.912 [INFO][5742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:50.912 [INFO][5742] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" iface="eth0" netns="" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:50.913 [INFO][5742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:50.913 [INFO][5742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.011 [INFO][5750] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.016 [INFO][5750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.016 [INFO][5750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.030 [WARNING][5750] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.031 [INFO][5750] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" HandleID="k8s-pod-network.f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Workload="localhost-k8s-goldmane--666569f655--vmzpv-eth0" Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.035 [INFO][5750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:51.052545 containerd[1602]: 2026-01-20 00:49:51.043 [INFO][5742] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e" Jan 20 00:49:51.052545 containerd[1602]: time="2026-01-20T00:49:51.052419536Z" level=info msg="TearDown network for sandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" successfully" Jan 20 00:49:51.078618 containerd[1602]: time="2026-01-20T00:49:51.077316078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:51.080117 containerd[1602]: time="2026-01-20T00:49:51.079459535Z" level=info msg="RemovePodSandbox \"f87f39c01bb639ed713aa7ba587d2eeff4199af65f84e75561ebe075235ab97e\" returns successfully" Jan 20 00:49:51.081094 containerd[1602]: time="2026-01-20T00:49:51.080947288Z" level=info msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.308 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c", Pod:"coredns-668d6bf9bc-xqh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie90db2bf6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.312 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.316 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" iface="eth0" netns="" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.316 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.316 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.492 [INFO][5777] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.494 [INFO][5777] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.495 [INFO][5777] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.511 [WARNING][5777] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.511 [INFO][5777] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.516 [INFO][5777] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:51.548486 containerd[1602]: 2026-01-20 00:49:51.538 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:51.554183 containerd[1602]: time="2026-01-20T00:49:51.549151165Z" level=info msg="TearDown network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" successfully" Jan 20 00:49:51.554183 containerd[1602]: time="2026-01-20T00:49:51.549429033Z" level=info msg="StopPodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" returns successfully" Jan 20 00:49:51.555322 containerd[1602]: time="2026-01-20T00:49:51.555144790Z" level=info msg="RemovePodSandbox for \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" Jan 20 00:49:51.555322 containerd[1602]: time="2026-01-20T00:49:51.555267347Z" level=info msg="Forcibly stopping sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\"" Jan 20 00:49:51.799446 containerd[1602]: time="2026-01-20T00:49:51.798440449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:49:51.918802 containerd[1602]: time="2026-01-20T00:49:51.918236814Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:51.933204 containerd[1602]: time="2026-01-20T00:49:51.932027203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:49:51.933661 containerd[1602]: time="2026-01-20T00:49:51.933116880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:49:51.934165 kubelet[2786]: E0120 00:49:51.934117 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:51.935907 kubelet[2786]: E0120 00:49:51.935467 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:51.936295 kubelet[2786]: E0120 00:49:51.936146 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh9pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-jcj86_calico-apiserver(303ab104-f18e-4de9-832d-feef41e44244): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:51.937648 kubelet[2786]: E0120 00:49:51.937568 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.791 [WARNING][5795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bd2ede38-0c37-420a-a2b6-8fd40bf2a8f5", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81b9215996e5ef7573cfa5f2f0772c021006f28b31dd0f94b4b22e650f0d012c", Pod:"coredns-668d6bf9bc-xqh9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie90db2bf6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.792 [INFO][5795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.792 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" iface="eth0" netns="" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.792 [INFO][5795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.792 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.951 [INFO][5804] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.953 [INFO][5804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.953 [INFO][5804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.966 [WARNING][5804] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.966 [INFO][5804] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" HandleID="k8s-pod-network.6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Workload="localhost-k8s-coredns--668d6bf9bc--xqh9g-eth0" Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:51.979 [INFO][5804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:52.044750 containerd[1602]: 2026-01-20 00:49:52.019 [INFO][5795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5" Jan 20 00:49:52.044750 containerd[1602]: time="2026-01-20T00:49:52.042824800Z" level=info msg="TearDown network for sandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" successfully" Jan 20 00:49:52.079420 containerd[1602]: time="2026-01-20T00:49:52.073482693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:52.079420 containerd[1602]: time="2026-01-20T00:49:52.077782961Z" level=info msg="RemovePodSandbox \"6252689100f8fe4514d46618cf286435e7790cad14baad809f85982bc9ab7bb5\" returns successfully" Jan 20 00:49:52.095127 containerd[1602]: time="2026-01-20T00:49:52.087404547Z" level=info msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.326 [WARNING][5822] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0", GenerateName:"calico-kube-controllers-55cdf5b57-", Namespace:"calico-system", SelfLink:"", UID:"c6f27543-10cf-4ae1-9e7a-a66dba01cb01", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdf5b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17", Pod:"calico-kube-controllers-55cdf5b57-92x4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408fdbbab03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.329 [INFO][5822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.329 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" iface="eth0" netns="" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.329 [INFO][5822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.329 [INFO][5822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.502 [INFO][5831] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.502 [INFO][5831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.503 [INFO][5831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.526 [WARNING][5831] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.529 [INFO][5831] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.544 [INFO][5831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:52.568666 containerd[1602]: 2026-01-20 00:49:52.554 [INFO][5822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:52.568666 containerd[1602]: time="2026-01-20T00:49:52.567794083Z" level=info msg="TearDown network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" successfully" Jan 20 00:49:52.568666 containerd[1602]: time="2026-01-20T00:49:52.567828547Z" level=info msg="StopPodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" returns successfully" Jan 20 00:49:52.575839 containerd[1602]: time="2026-01-20T00:49:52.572776982Z" level=info msg="RemovePodSandbox for \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" Jan 20 00:49:52.575839 containerd[1602]: time="2026-01-20T00:49:52.572826434Z" level=info msg="Forcibly stopping sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\"" Jan 20 00:49:52.869908 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:53082.service - OpenSSH per-connection server daemon (10.0.0.1:53082). Jan 20 00:49:53.150410 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 53082 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:53.156386 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:53.177883 systemd-logind[1586]: New session 14 of user core. Jan 20 00:49:53.189225 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:52.988 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0", GenerateName:"calico-kube-controllers-55cdf5b57-", Namespace:"calico-system", SelfLink:"", UID:"c6f27543-10cf-4ae1-9e7a-a66dba01cb01", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdf5b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba98ceee78b6fe8cc6abeea4954c320d7278b1d2fb8d4f93abc26ce919373f17", Pod:"calico-kube-controllers-55cdf5b57-92x4l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali408fdbbab03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:52.988 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:52.988 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" iface="eth0" netns="" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:52.994 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:52.994 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.143 [INFO][5857] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.143 [INFO][5857] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.144 [INFO][5857] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.161 [WARNING][5857] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.161 [INFO][5857] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" HandleID="k8s-pod-network.b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Workload="localhost-k8s-calico--kube--controllers--55cdf5b57--92x4l-eth0" Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.167 [INFO][5857] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:53.190029 containerd[1602]: 2026-01-20 00:49:53.181 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f" Jan 20 00:49:53.190029 containerd[1602]: time="2026-01-20T00:49:53.185680429Z" level=info msg="TearDown network for sandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" successfully" Jan 20 00:49:53.218674 containerd[1602]: time="2026-01-20T00:49:53.214564316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:53.218674 containerd[1602]: time="2026-01-20T00:49:53.214662350Z" level=info msg="RemovePodSandbox \"b559df794f5efb695915d07db9c969927ebeebc82dfd3f77597bb1732dfbd62f\" returns successfully" Jan 20 00:49:53.218674 containerd[1602]: time="2026-01-20T00:49:53.216868562Z" level=info msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.374 [WARNING][5877] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0", GenerateName:"calico-apiserver-7748477466-", Namespace:"calico-apiserver", SelfLink:"", UID:"7442950d-347c-4ccb-839f-bbcef74b512f", ResourceVersion:"1433", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7748477466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db", Pod:"calico-apiserver-7748477466-xqhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4987466b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.376 [INFO][5877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.376 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" iface="eth0" netns="" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.376 [INFO][5877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.376 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.714 [INFO][5894] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.719 [INFO][5894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.724 [INFO][5894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.767 [WARNING][5894] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.769 [INFO][5894] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.780 [INFO][5894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:53.796032 containerd[1602]: 2026-01-20 00:49:53.790 [INFO][5877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:53.796032 containerd[1602]: time="2026-01-20T00:49:53.795870265Z" level=info msg="TearDown network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" successfully" Jan 20 00:49:53.796032 containerd[1602]: time="2026-01-20T00:49:53.795899679Z" level=info msg="StopPodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" returns successfully" Jan 20 00:49:53.798118 containerd[1602]: time="2026-01-20T00:49:53.798086476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:49:53.798632 containerd[1602]: time="2026-01-20T00:49:53.798108672Z" level=info msg="RemovePodSandbox for \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" Jan 20 00:49:53.798632 containerd[1602]: time="2026-01-20T00:49:53.798523676Z" level=info msg="Forcibly stopping sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\"" Jan 20 00:49:53.832733 sshd[5853]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:53.858825 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:53082.service: Deactivated successfully. Jan 20 00:49:53.881875 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:49:53.886940 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:49:53.894376 systemd-logind[1586]: Removed session 14. 
Jan 20 00:49:53.914908 containerd[1602]: time="2026-01-20T00:49:53.914851826Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:53.922935 containerd[1602]: time="2026-01-20T00:49:53.920729052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:49:53.922935 containerd[1602]: time="2026-01-20T00:49:53.920673719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:49:53.928267 kubelet[2786]: E0120 00:49:53.928137 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:49:53.930866 kubelet[2786]: E0120 00:49:53.928601 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:49:53.931795 kubelet[2786]: E0120 00:49:53.931685 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsnnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmzpv_calico-system(9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:53.933119 kubelet[2786]: E0120 00:49:53.933052 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.045 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0", GenerateName:"calico-apiserver-7748477466-", Namespace:"calico-apiserver", SelfLink:"", UID:"7442950d-347c-4ccb-839f-bbcef74b512f", ResourceVersion:"1433", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7748477466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6d2872d338e0be9b92fb68721bdf3aaf838fbe454d7fe283d5788aed3f958db", Pod:"calico-apiserver-7748477466-xqhsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4987466b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.051 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.051 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" iface="eth0" netns="" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.051 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.051 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.246 [INFO][5922] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.247 [INFO][5922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.247 [INFO][5922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.280 [WARNING][5922] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.281 [INFO][5922] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" HandleID="k8s-pod-network.fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Workload="localhost-k8s-calico--apiserver--7748477466--xqhsk-eth0" Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.288 [INFO][5922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:54.314158 containerd[1602]: 2026-01-20 00:49:54.298 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e" Jan 20 00:49:54.314158 containerd[1602]: time="2026-01-20T00:49:54.311858529Z" level=info msg="TearDown network for sandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" successfully" Jan 20 00:49:54.334495 containerd[1602]: time="2026-01-20T00:49:54.334388604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:54.334642 containerd[1602]: time="2026-01-20T00:49:54.334502215Z" level=info msg="RemovePodSandbox \"fec51fc097e508a3f5d5f16b63279a7927a6b88488e5f1afa7a5d57b3234072e\" returns successfully" Jan 20 00:49:54.336093 containerd[1602]: time="2026-01-20T00:49:54.336056205Z" level=info msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" Jan 20 00:49:54.788129 containerd[1602]: time="2026-01-20T00:49:54.787309776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.610 [WARNING][5942] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" WorkloadEndpoint="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.623 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.623 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" iface="eth0" netns="" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.623 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.623 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.869 [INFO][5950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.869 [INFO][5950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.869 [INFO][5950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.946 [WARNING][5950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.947 [INFO][5950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.960 [INFO][5950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:54.988620 containerd[1602]: 2026-01-20 00:49:54.972 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:54.989795 containerd[1602]: time="2026-01-20T00:49:54.989737719Z" level=info msg="TearDown network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" successfully" Jan 20 00:49:54.989900 containerd[1602]: time="2026-01-20T00:49:54.989875795Z" level=info msg="StopPodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" returns successfully" Jan 20 00:49:54.990950 containerd[1602]: time="2026-01-20T00:49:54.990882152Z" level=info msg="RemovePodSandbox for \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" Jan 20 00:49:54.991115 containerd[1602]: time="2026-01-20T00:49:54.991087154Z" level=info msg="Forcibly stopping sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\"" Jan 20 00:49:55.004055 containerd[1602]: time="2026-01-20T00:49:55.001791926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:55.018830 containerd[1602]: time="2026-01-20T00:49:55.017891981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:49:55.018830 containerd[1602]: time="2026-01-20T00:49:55.018750382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:49:55.028717 kubelet[2786]: E0120 00:49:55.024093 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:55.028717 kubelet[2786]: E0120 00:49:55.024526 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:49:55.029571 containerd[1602]: time="2026-01-20T00:49:55.029254498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:49:55.041841 kubelet[2786]: E0120 00:49:55.038020 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-f4nqr_calico-apiserver(7e16703d-6774-4dbd-a448-684d9c6307e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:55.041841 kubelet[2786]: E0120 00:49:55.041294 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:49:55.129132 containerd[1602]: time="2026-01-20T00:49:55.129079378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:55.133337 containerd[1602]: time="2026-01-20T00:49:55.133274116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:49:55.133744 containerd[1602]: time="2026-01-20T00:49:55.133695431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:49:55.134259 kubelet[2786]: E0120 
00:49:55.134195 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:49:55.134510 kubelet[2786]: E0120 00:49:55.134474 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:49:55.138937 kubelet[2786]: E0120 00:49:55.137944 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:55.152015 containerd[1602]: time="2026-01-20T00:49:55.150272185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:49:55.289167 containerd[1602]: time="2026-01-20T00:49:55.288728155Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:49:55.340555 containerd[1602]: time="2026-01-20T00:49:55.340001743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 
00:49:55.340555 containerd[1602]: time="2026-01-20T00:49:55.340315591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:49:55.347636 kubelet[2786]: E0120 00:49:55.342881 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:49:55.347636 kubelet[2786]: E0120 00:49:55.343232 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:49:55.347636 kubelet[2786]: E0120 00:49:55.344530 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:49:55.347636 kubelet[2786]: E0120 00:49:55.346081 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.208 [WARNING][5968] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" WorkloadEndpoint="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.210 [INFO][5968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.210 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" iface="eth0" netns="" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.210 [INFO][5968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.210 [INFO][5968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.444 [INFO][5977] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.449 [INFO][5977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.450 [INFO][5977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.464 [WARNING][5977] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.464 [INFO][5977] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" HandleID="k8s-pod-network.9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Workload="localhost-k8s-whisker--57ccb4848f--ng25j-eth0" Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.471 [INFO][5977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:55.509022 containerd[1602]: 2026-01-20 00:49:55.483 [INFO][5968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171" Jan 20 00:49:55.509022 containerd[1602]: time="2026-01-20T00:49:55.503061845Z" level=info msg="TearDown network for sandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" successfully" Jan 20 00:49:55.610595 containerd[1602]: time="2026-01-20T00:49:55.604892301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:55.610595 containerd[1602]: time="2026-01-20T00:49:55.609124051Z" level=info msg="RemovePodSandbox \"9953f59fdf80b7462a7c3276351aedcc1d83dbe110cfdc9932d1dcb187c7b171\" returns successfully" Jan 20 00:49:55.622531 containerd[1602]: time="2026-01-20T00:49:55.622476263Z" level=info msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" Jan 20 00:49:55.796382 kubelet[2786]: E0120 00:49:55.795921 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:55.859 [WARNING][5999] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h28hk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f255c2e-3546-405d-a567-940c6cad406e", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b", Pod:"coredns-668d6bf9bc-h28hk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid78bf62c3b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:55.865 [INFO][5999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:55.865 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" iface="eth0" netns="" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:55.865 [INFO][5999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:55.865 [INFO][5999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.011 [INFO][6007] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.011 [INFO][6007] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.011 [INFO][6007] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.022 [WARNING][6007] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.022 [INFO][6007] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.027 [INFO][6007] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:56.044722 containerd[1602]: 2026-01-20 00:49:56.031 [INFO][5999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.045735 containerd[1602]: time="2026-01-20T00:49:56.045606608Z" level=info msg="TearDown network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" successfully" Jan 20 00:49:56.045897 containerd[1602]: time="2026-01-20T00:49:56.045808095Z" level=info msg="StopPodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" returns successfully" Jan 20 00:49:56.050399 containerd[1602]: time="2026-01-20T00:49:56.050322891Z" level=info msg="RemovePodSandbox for \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" Jan 20 00:49:56.051627 containerd[1602]: time="2026-01-20T00:49:56.051599310Z" level=info msg="Forcibly stopping sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\"" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.170 [WARNING][6025] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h28hk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f255c2e-3546-405d-a567-940c6cad406e", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 47, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5497f70df3acba2acffaa2178b7ec953b726d530f8916c7b676168c8f15192b", Pod:"coredns-668d6bf9bc-h28hk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid78bf62c3b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.172 [INFO][6025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.172 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" iface="eth0" netns="" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.172 [INFO][6025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.172 [INFO][6025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.242 [INFO][6034] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.242 [INFO][6034] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.243 [INFO][6034] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.298 [WARNING][6034] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.300 [INFO][6034] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" HandleID="k8s-pod-network.bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Workload="localhost-k8s-coredns--668d6bf9bc--h28hk-eth0" Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.314 [INFO][6034] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:56.375294 containerd[1602]: 2026-01-20 00:49:56.361 [INFO][6025] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75" Jan 20 00:49:56.385277 containerd[1602]: time="2026-01-20T00:49:56.379152999Z" level=info msg="TearDown network for sandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" successfully" Jan 20 00:49:56.397142 containerd[1602]: time="2026-01-20T00:49:56.397083774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:56.397650 containerd[1602]: time="2026-01-20T00:49:56.397478771Z" level=info msg="RemovePodSandbox \"bcf749232bed02fa12f46a460924c35580a952dcee47d929ffe73986b0d57c75\" returns successfully" Jan 20 00:49:56.405739 containerd[1602]: time="2026-01-20T00:49:56.403778005Z" level=info msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" Jan 20 00:49:56.847032 kubelet[2786]: E0120 00:49:56.842539 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.644 [WARNING][6050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e16703d-6774-4dbd-a448-684d9c6307e4", ResourceVersion:"1484", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a", Pod:"calico-apiserver-7ddd4777cd-f4nqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39accc5aeb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.650 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.651 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" iface="eth0" netns="" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.651 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.651 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.872 [INFO][6059] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.874 [INFO][6059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.875 [INFO][6059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.893 [WARNING][6059] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.893 [INFO][6059] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.943 [INFO][6059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:56.971292 containerd[1602]: 2026-01-20 00:49:56.953 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:56.978045 containerd[1602]: time="2026-01-20T00:49:56.977444999Z" level=info msg="TearDown network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" successfully" Jan 20 00:49:56.978045 containerd[1602]: time="2026-01-20T00:49:56.977603966Z" level=info msg="StopPodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" returns successfully" Jan 20 00:49:56.986234 containerd[1602]: time="2026-01-20T00:49:56.986076210Z" level=info msg="RemovePodSandbox for \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" Jan 20 00:49:56.986923 containerd[1602]: time="2026-01-20T00:49:56.986506232Z" level=info msg="Forcibly stopping sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\"" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.335 [WARNING][6076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e16703d-6774-4dbd-a448-684d9c6307e4", ResourceVersion:"1484", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b04d9d3b9f39ada526f85295e16b729497723f01919d6596410eaa6e68a5ad4a", Pod:"calico-apiserver-7ddd4777cd-f4nqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39accc5aeb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.340 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.340 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" iface="eth0" netns="" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.340 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.340 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.530 [INFO][6086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.532 [INFO][6086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.532 [INFO][6086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.899 [WARNING][6086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:57.903 [INFO][6086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" HandleID="k8s-pod-network.2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--f4nqr-eth0" Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:58.092 [INFO][6086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:58.116246 containerd[1602]: 2026-01-20 00:49:58.103 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb" Jan 20 00:49:58.123695 containerd[1602]: time="2026-01-20T00:49:58.119575136Z" level=info msg="TearDown network for sandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" successfully" Jan 20 00:49:58.148152 containerd[1602]: time="2026-01-20T00:49:58.148045229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:49:58.148894 containerd[1602]: time="2026-01-20T00:49:58.148243329Z" level=info msg="RemovePodSandbox \"2d93cae1db49e5303739a1723c4431b0dad4461e93421f603704060832b224fb\" returns successfully" Jan 20 00:49:58.151722 containerd[1602]: time="2026-01-20T00:49:58.151517456Z" level=info msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.474 [WARNING][6104] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ffgch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"946b9e08-0972-42be-947f-c9b1fe484382", ResourceVersion:"1485", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d", Pod:"csi-node-driver-ffgch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb7ab818266", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.477 [INFO][6104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.477 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" iface="eth0" netns="" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.477 [INFO][6104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.477 [INFO][6104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.738 [INFO][6112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.738 [INFO][6112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.739 [INFO][6112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.754 [WARNING][6112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.755 [INFO][6112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.761 [INFO][6112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:49:58.784043 containerd[1602]: 2026-01-20 00:49:58.772 [INFO][6104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:49:58.784043 containerd[1602]: time="2026-01-20T00:49:58.783660864Z" level=info msg="TearDown network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" successfully" Jan 20 00:49:58.785765 containerd[1602]: time="2026-01-20T00:49:58.784989640Z" level=info msg="StopPodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" returns successfully" Jan 20 00:49:58.785765 containerd[1602]: time="2026-01-20T00:49:58.785649409Z" level=info msg="RemovePodSandbox for \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" Jan 20 00:49:58.785765 containerd[1602]: time="2026-01-20T00:49:58.785684464Z" level=info msg="Forcibly stopping sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\"" Jan 20 00:49:58.941402 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:53084.service - OpenSSH per-connection server daemon (10.0.0.1:53084). Jan 20 00:49:59.407724 sshd[6133]: Accepted publickey for core from 10.0.0.1 port 53084 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:59.429010 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:59.628262 systemd-logind[1586]: New session 15 of user core. Jan 20 00:49:59.638092 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:49:59.873251 containerd[1602]: time="2026-01-20T00:49:59.866702846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:49:59.577 [WARNING][6128] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ffgch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"946b9e08-0972-42be-947f-c9b1fe484382", ResourceVersion:"1485", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ecfe8f7f155538144cb033c5a30e97982f020194d3fcb77bca3d65d83e9fd9d", Pod:"csi-node-driver-ffgch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb7ab818266", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:49:59.606 [INFO][6128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:49:59.607 [INFO][6128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" iface="eth0" netns="" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:49:59.607 [INFO][6128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:49:59.607 [INFO][6128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.619 [INFO][6139] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.620 [INFO][6139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.620 [INFO][6139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.640 [WARNING][6139] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.640 [INFO][6139] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" HandleID="k8s-pod-network.82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Workload="localhost-k8s-csi--node--driver--ffgch-eth0" Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.650 [INFO][6139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:00.763471 containerd[1602]: 2026-01-20 00:50:00.751 [INFO][6128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a" Jan 20 00:50:00.764394 containerd[1602]: time="2026-01-20T00:50:00.764181589Z" level=info msg="TearDown network for sandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" successfully" Jan 20 00:50:00.777467 containerd[1602]: time="2026-01-20T00:50:00.775082833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:00.901339 containerd[1602]: time="2026-01-20T00:50:00.900653546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:50:00.905930 containerd[1602]: time="2026-01-20T00:50:00.902818163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:50:00.905930 containerd[1602]: time="2026-01-20T00:50:00.903237949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:50:00.906265 containerd[1602]: time="2026-01-20T00:50:00.906124847Z" level=info msg="RemovePodSandbox \"82d7c42abe388e32efdff5c9622d7e30a69f7b599c9a20c502ba4d84d3d7bc0a\" returns successfully" Jan 20 00:50:00.906852 kubelet[2786]: E0120 00:50:00.906687 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:00.906852 kubelet[2786]: E0120 00:50:00.906766 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:00.909335 containerd[1602]: time="2026-01-20T00:50:00.909041013Z" level=info msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" Jan 20 00:50:00.911440 kubelet[2786]: E0120 00:50:00.907053 2786 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpsm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7748477466-xqhsk_calico-apiserver(7442950d-347c-4ccb-839f-bbcef74b512f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:00.911440 kubelet[2786]: E0120 00:50:00.911191 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:50:01.035480 sshd[6133]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:01.051520 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:53100.service - OpenSSH per-connection server daemon (10.0.0.1:53100). Jan 20 00:50:01.052446 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:53084.service: Deactivated successfully. Jan 20 00:50:01.062455 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:50:01.068916 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:50:01.087813 systemd-logind[1586]: Removed session 15. 
Jan 20 00:50:01.174191 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 53100 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:01.177837 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:01.214403 systemd-logind[1586]: New session 16 of user core. Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.119 [WARNING][6168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"303ab104-f18e-4de9-832d-feef41e44244", ResourceVersion:"1466", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d", Pod:"calico-apiserver-7ddd4777cd-jcj86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib91424c359c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.120 [INFO][6168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.120 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" iface="eth0" netns="" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.120 [INFO][6168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.121 [INFO][6168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.193 [INFO][6182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.194 [INFO][6182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.194 [INFO][6182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.208 [WARNING][6182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.208 [INFO][6182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.211 [INFO][6182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:01.225665 containerd[1602]: 2026-01-20 00:50:01.219 [INFO][6168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.230836 containerd[1602]: time="2026-01-20T00:50:01.225754185Z" level=info msg="TearDown network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" successfully" Jan 20 00:50:01.230836 containerd[1602]: time="2026-01-20T00:50:01.225798928Z" level=info msg="StopPodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" returns successfully" Jan 20 00:50:01.226737 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:50:01.238408 containerd[1602]: time="2026-01-20T00:50:01.238142248Z" level=info msg="RemovePodSandbox for \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" Jan 20 00:50:01.238408 containerd[1602]: time="2026-01-20T00:50:01.238278522Z" level=info msg="Forcibly stopping sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\"" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.377 [WARNING][6200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0", GenerateName:"calico-apiserver-7ddd4777cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"303ab104-f18e-4de9-832d-feef41e44244", ResourceVersion:"1466", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddd4777cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e5c2765e0839e760624a668b181378e1c38764e209d30e1d9ebbef3f1bed7d", Pod:"calico-apiserver-7ddd4777cd-jcj86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib91424c359c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.378 [INFO][6200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.378 [INFO][6200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" iface="eth0" netns="" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.378 [INFO][6200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.378 [INFO][6200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.493 [INFO][6214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.497 [INFO][6214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.497 [INFO][6214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.525 [WARNING][6214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.525 [INFO][6214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" HandleID="k8s-pod-network.fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Workload="localhost-k8s-calico--apiserver--7ddd4777cd--jcj86-eth0" Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.533 [INFO][6214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:01.556646 containerd[1602]: 2026-01-20 00:50:01.546 [INFO][6200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b" Jan 20 00:50:01.556646 containerd[1602]: time="2026-01-20T00:50:01.552562976Z" level=info msg="TearDown network for sandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" successfully" Jan 20 00:50:01.573063 containerd[1602]: time="2026-01-20T00:50:01.572805765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:50:01.573472 containerd[1602]: time="2026-01-20T00:50:01.573309604Z" level=info msg="RemovePodSandbox \"fee4b0f5baf2ae6d47a52286da0dd9340315f1b2793995e34eaf1ac531891c3b\" returns successfully" Jan 20 00:50:01.718121 sshd[6174]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:01.745532 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:53112.service - OpenSSH per-connection server daemon (10.0.0.1:53112). Jan 20 00:50:01.746606 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:53100.service: Deactivated successfully. Jan 20 00:50:01.756821 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:50:01.762698 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:50:01.782189 systemd-logind[1586]: Removed session 16. Jan 20 00:50:01.852339 sshd[6223]: Accepted publickey for core from 10.0.0.1 port 53112 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:01.857947 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:01.873562 systemd-logind[1586]: New session 17 of user core. Jan 20 00:50:01.887994 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:50:02.227126 sshd[6223]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:02.239636 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:53112.service: Deactivated successfully. Jan 20 00:50:02.254654 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:50:02.254896 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:50:02.261434 systemd-logind[1586]: Removed session 17. 
Jan 20 00:50:04.372840 kubelet[2786]: E0120 00:50:04.372744 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244"
Jan 20 00:50:04.376549 kubelet[2786]: E0120 00:50:04.374211 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01"
Jan 20 00:50:05.783118 kubelet[2786]: E0120 00:50:05.781753 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a"
Jan 20 00:50:06.838911 kubelet[2786]: E0120 00:50:06.838726 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382"
Jan 20 00:50:09.519863 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:44480.service - OpenSSH per-connection server daemon (10.0.0.1:44480).
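
From here on the kubelet reports ImagePullBackOff rather than ErrImagePull: after a failed pull it waits before retrying, with a delay Kubernetes documents as roughly doubling per attempt up to a 300-second cap. A sketch of that schedule; the values are illustrative, not read from kubelet configuration:

    package main

    import (
        "fmt"
        "time"
    )

    // backoffSchedule produces the shape of kubelet's image pull back-off:
    // the delay doubles after each consecutive failure, capped at a ceiling
    // (documented as roughly 10s doubling to a 5-minute maximum).
    func backoffSchedule(base, ceiling time.Duration, failures int) []time.Duration {
        var out []time.Duration
        d := base
        for i := 0; i < failures; i++ {
            out = append(out, d)
            d *= 2
            if d > ceiling {
                d = ceiling
            }
        }
        return out
    }

    func main() {
        fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 7))
        // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
    }
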
Jan 20 00:50:09.531537 kubelet[2786]: E0120 00:50:09.529484 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:50:09.531537 kubelet[2786]: E0120 00:50:09.530934 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:50:09.735499 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 44480 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:09.742349 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:09.771165 systemd-logind[1586]: New session 18 of user core. Jan 20 00:50:09.780727 kubelet[2786]: E0120 00:50:09.777623 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:09.781469 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:50:11.900820 sshd[6266]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:11.919615 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:44480.service: Deactivated successfully. Jan 20 00:50:11.932737 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:50:11.933542 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:50:11.935947 systemd-logind[1586]: Removed session 18. 
Jan 20 00:50:13.793130 kubelet[2786]: E0120 00:50:13.793073 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:50:15.794783 kubelet[2786]: E0120 00:50:15.794731 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:50:17.151855 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:48776.service - OpenSSH per-connection server daemon (10.0.0.1:48776). Jan 20 00:50:17.227221 sshd[6282]: Accepted publickey for core from 10.0.0.1 port 48776 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:17.231590 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:17.249895 systemd-logind[1586]: New session 19 of user core. Jan 20 00:50:17.262192 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:50:17.637381 sshd[6282]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:17.646697 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:48776.service: Deactivated successfully. Jan 20 00:50:17.660218 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Jan 20 00:50:17.660775 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:50:17.663249 systemd-logind[1586]: Removed session 19. 
Jan 20 00:50:17.814380 kubelet[2786]: E0120 00:50:17.812928 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:18.818861 kubelet[2786]: E0120 00:50:18.815710 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:50:18.818861 kubelet[2786]: E0120 00:50:18.815847 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:50:20.779538 kubelet[2786]: E0120 00:50:20.778559 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:20.793263 kubelet[2786]: E0120 00:50:20.786839 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:50:21.837285 containerd[1602]: time="2026-01-20T00:50:21.835083740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:50:21.960378 containerd[1602]: time="2026-01-20T00:50:21.960304514Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:21.968456 containerd[1602]: time="2026-01-20T00:50:21.966265556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 
00:50:21.968456 containerd[1602]: time="2026-01-20T00:50:21.966426707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:50:21.968742 kubelet[2786]: E0120 00:50:21.967061 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:50:21.968742 kubelet[2786]: E0120 00:50:21.967244 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:50:21.968742 kubelet[2786]: E0120 00:50:21.967734 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd15b3b8928842729e5a367f173cdad6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:21.974357 containerd[1602]: time="2026-01-20T00:50:21.974306479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:50:22.094565 containerd[1602]: time="2026-01-20T00:50:22.094334605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:22.100522 containerd[1602]: time="2026-01-20T00:50:22.100362593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:50:22.100522 containerd[1602]: time="2026-01-20T00:50:22.100454010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:50:22.100739 kubelet[2786]: E0120 00:50:22.100669 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:50:22.100816 kubelet[2786]: E0120 00:50:22.100744 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:50:22.103187 kubelet[2786]: E0120 00:50:22.100899 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:22.103187 kubelet[2786]: E0120 
00:50:22.102868 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:50:22.668132 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:34304.service - OpenSSH per-connection server daemon (10.0.0.1:34304). Jan 20 00:50:22.785638 kubelet[2786]: E0120 00:50:22.782225 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:50:22.799113 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 34304 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:22.810152 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:22.844918 systemd-logind[1586]: New session 20 of user core. Jan 20 00:50:22.871188 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:50:23.314891 sshd[6298]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:23.327487 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:50:23.328879 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:34304.service: Deactivated successfully. Jan 20 00:50:23.338576 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:50:23.351626 systemd-logind[1586]: Removed session 20. 
Jan 20 00:50:24.783350 kubelet[2786]: E0120 00:50:24.782673 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:26.983524 kubelet[2786]: E0120 00:50:26.980080 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:50:26.991034 kubelet[2786]: E0120 00:50:26.990630 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:50:28.351861 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:34312.service - OpenSSH per-connection server daemon (10.0.0.1:34312). Jan 20 00:50:28.530666 sshd[6321]: Accepted publickey for core from 10.0.0.1 port 34312 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:28.536947 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:28.553865 systemd-logind[1586]: New session 21 of user core. Jan 20 00:50:28.566655 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:50:28.783497 kubelet[2786]: E0120 00:50:28.777386 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:29.011786 sshd[6321]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:29.036180 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:34312.service: Deactivated successfully. Jan 20 00:50:29.071243 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:50:29.082792 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:50:29.087181 systemd-logind[1586]: Removed session 21. 
Jan 20 00:50:30.784797 kubelet[2786]: E0120 00:50:30.777949 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:31.786240 containerd[1602]: time="2026-01-20T00:50:31.784658874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:50:31.791863 kubelet[2786]: E0120 00:50:31.785714 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:50:31.891177 containerd[1602]: time="2026-01-20T00:50:31.890881738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:31.899674 containerd[1602]: time="2026-01-20T00:50:31.898541278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:50:31.900081 containerd[1602]: time="2026-01-20T00:50:31.898771249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:50:31.901615 kubelet[2786]: E0120 00:50:31.901530 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:50:31.901729 kubelet[2786]: E0120 00:50:31.901618 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:50:31.905570 kubelet[2786]: E0120 00:50:31.903168 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55cdf5b57-92x4l_calico-system(c6f27543-10cf-4ae1-9e7a-a66dba01cb01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:31.906696 kubelet[2786]: E0120 00:50:31.906077 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:50:34.047046 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:33864.service - OpenSSH per-connection 
server daemon (10.0.0.1:33864). Jan 20 00:50:34.323776 sshd[6361]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:34.333858 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:34.365706 systemd-logind[1586]: New session 22 of user core. Jan 20 00:50:34.378661 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 00:50:34.785780 containerd[1602]: time="2026-01-20T00:50:34.781605903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:50:34.846673 sshd[6361]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:34.867378 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:33864.service: Deactivated successfully. Jan 20 00:50:34.887914 containerd[1602]: time="2026-01-20T00:50:34.887664956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:34.889407 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:50:34.892913 containerd[1602]: time="2026-01-20T00:50:34.892680538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:50:34.892913 containerd[1602]: time="2026-01-20T00:50:34.892818705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:50:34.893177 kubelet[2786]: E0120 00:50:34.893127 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:50:34.896341 kubelet[2786]: E0120 00:50:34.893198 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:50:34.896341 kubelet[2786]: E0120 00:50:34.893373 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsnnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmzpv_calico-system(9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:34.896341 kubelet[2786]: E0120 00:50:34.895910 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:50:34.894127 systemd[1]: session-22.scope: 
Deactivated successfully. Jan 20 00:50:34.903655 systemd-logind[1586]: Removed session 22. Jan 20 00:50:35.798543 containerd[1602]: time="2026-01-20T00:50:35.798190875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:50:35.804892 kubelet[2786]: E0120 00:50:35.804748 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:50:35.923018 containerd[1602]: time="2026-01-20T00:50:35.922912847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:35.940053 containerd[1602]: time="2026-01-20T00:50:35.939766922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:50:35.940053 containerd[1602]: time="2026-01-20T00:50:35.939890893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:50:35.940930 kubelet[2786]: E0120 00:50:35.940834 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:35.941620 kubelet[2786]: E0120 00:50:35.940939 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:35.941620 kubelet[2786]: E0120 00:50:35.941189 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-f4nqr_calico-apiserver(7e16703d-6774-4dbd-a448-684d9c6307e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:35.944090 kubelet[2786]: E0120 00:50:35.942606 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:50:38.782250 kubelet[2786]: E0120 00:50:38.780283 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:39.795259 containerd[1602]: time="2026-01-20T00:50:39.793122815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:50:39.876754 containerd[1602]: time="2026-01-20T00:50:39.874806758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:39.883937 containerd[1602]: time="2026-01-20T00:50:39.883731119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:50:39.883937 containerd[1602]: time="2026-01-20T00:50:39.883783064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:50:39.884667 kubelet[2786]: E0120 00:50:39.884548 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:39.884667 kubelet[2786]: E0120 00:50:39.884637 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:39.887611 kubelet[2786]: E0120 00:50:39.884819 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh9pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddd4777cd-jcj86_calico-apiserver(303ab104-f18e-4de9-832d-feef41e44244): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:39.887611 kubelet[2786]: E0120 00:50:39.886259 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:50:39.885679 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:33876.service - OpenSSH per-connection server daemon (10.0.0.1:33876). Jan 20 00:50:40.001761 sshd[6384]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:40.008150 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:40.033347 systemd-logind[1586]: New session 23 of user core. Jan 20 00:50:40.045745 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 00:50:40.393422 sshd[6384]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:40.403267 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:33876.service: Deactivated successfully. Jan 20 00:50:40.424925 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:50:40.443414 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:50:40.450697 systemd-logind[1586]: Removed session 23. Jan 20 00:50:40.782301 kubelet[2786]: E0120 00:50:40.781420 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:50:43.778277 kubelet[2786]: E0120 00:50:43.778151 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:50:44.797275 containerd[1602]: time="2026-01-20T00:50:44.792867200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:50:44.890746 containerd[1602]: time="2026-01-20T00:50:44.890658494Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:44.895537 containerd[1602]: time="2026-01-20T00:50:44.895128390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:50:44.895537 containerd[1602]: time="2026-01-20T00:50:44.895212475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:50:44.896404 kubelet[2786]: E0120 00:50:44.896275 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:50:44.896404 kubelet[2786]: E0120 00:50:44.896353 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:50:44.907663 kubelet[2786]: E0120 00:50:44.896554 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:44.914317 containerd[1602]: time="2026-01-20T00:50:44.910014936Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:50:45.022882 containerd[1602]: time="2026-01-20T00:50:45.022756800Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:45.032543 containerd[1602]: time="2026-01-20T00:50:45.032470883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:50:45.032874 containerd[1602]: time="2026-01-20T00:50:45.032822689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:50:45.036157 kubelet[2786]: E0120 00:50:45.035479 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:50:45.036157 kubelet[2786]: E0120 00:50:45.035564 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:50:45.036157 kubelet[2786]: E0120 00:50:45.035733 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ffgch_calico-system(946b9e08-0972-42be-947f-c9b1fe484382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:45.037223 kubelet[2786]: E0120 00:50:45.036891 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:50:45.411033 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:47218.service - OpenSSH per-connection server daemon (10.0.0.1:47218). Jan 20 00:50:45.854571 sshd[6422]: Accepted publickey for core from 10.0.0.1 port 47218 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:45.860529 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:45.879382 systemd-logind[1586]: New session 24 of user core. 
Jan 20 00:50:45.907267 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 00:50:46.524604 sshd[6422]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:46.538518 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:50:46.540082 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:47218.service: Deactivated successfully. Jan 20 00:50:46.553997 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:50:46.563609 systemd-logind[1586]: Removed session 24. Jan 20 00:50:46.783076 kubelet[2786]: E0120 00:50:46.782379 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:50:46.789183 kubelet[2786]: E0120 00:50:46.789049 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:50:47.805024 kubelet[2786]: E0120 00:50:47.803284 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:50:51.554533 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:47228.service - OpenSSH per-connection server daemon (10.0.0.1:47228). Jan 20 00:50:51.657608 sshd[6441]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:51.660395 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:51.681420 systemd-logind[1586]: New session 25 of user core. Jan 20 00:50:51.704766 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 00:50:51.785412 containerd[1602]: time="2026-01-20T00:50:51.784264697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:50:51.928315 containerd[1602]: time="2026-01-20T00:50:51.927269591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:51.935549 containerd[1602]: time="2026-01-20T00:50:51.935361924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:50:51.939034 containerd[1602]: time="2026-01-20T00:50:51.935722737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:50:51.939157 kubelet[2786]: E0120 00:50:51.936053 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:51.939157 kubelet[2786]: E0120 00:50:51.936124 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:50:51.939157 kubelet[2786]: E0120 00:50:51.936344 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpsm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7748477466-xqhsk_calico-apiserver(7442950d-347c-4ccb-839f-bbcef74b512f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:51.939157 kubelet[2786]: E0120 00:50:51.937651 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:50:52.087369 sshd[6441]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:52.102071 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:47228.service: Deactivated successfully. Jan 20 00:50:52.114050 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:50:52.115531 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:50:52.119320 systemd-logind[1586]: Removed session 25. Jan 20 00:50:55.800767 kubelet[2786]: E0120 00:50:55.800706 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:50:57.112229 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:41348.service - OpenSSH per-connection server daemon (10.0.0.1:41348). Jan 20 00:50:57.205662 sshd[6459]: Accepted publickey for core from 10.0.0.1 port 41348 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:50:57.213628 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:50:57.239027 systemd-logind[1586]: New session 26 of user core. Jan 20 00:50:57.249831 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 00:50:57.567364 sshd[6459]: pam_unix(sshd:session): session closed for user core Jan 20 00:50:57.577364 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:50:57.578883 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:41348.service: Deactivated successfully. Jan 20 00:50:57.590405 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:50:57.592438 systemd-logind[1586]: Removed session 26. Jan 20 00:50:57.795067 kubelet[2786]: E0120 00:50:57.795016 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:50:57.800206 kubelet[2786]: E0120 00:50:57.795083 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:50:58.784012 kubelet[2786]: E0120 00:50:58.783861 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:50:59.793612 kubelet[2786]: E0120 00:50:59.792085 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:51:02.591775 systemd[1]: Started sshd@26-10.0.0.92:22-10.0.0.1:59634.service - OpenSSH per-connection server daemon (10.0.0.1:59634). Jan 20 00:51:02.695641 sshd[6499]: Accepted publickey for core from 10.0.0.1 port 59634 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:02.700903 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:02.724917 systemd-logind[1586]: New session 27 of user core. Jan 20 00:51:02.745155 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 00:51:02.778704 kubelet[2786]: E0120 00:51:02.778352 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:51:03.060654 sshd[6499]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:03.078250 systemd[1]: Started sshd@27-10.0.0.92:22-10.0.0.1:59642.service - OpenSSH per-connection server daemon (10.0.0.1:59642). Jan 20 00:51:03.079196 systemd[1]: sshd@26-10.0.0.92:22-10.0.0.1:59634.service: Deactivated successfully. Jan 20 00:51:03.100407 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 00:51:03.106308 systemd-logind[1586]: Session 27 logged out. Waiting for processes to exit. Jan 20 00:51:03.113552 systemd-logind[1586]: Removed session 27. Jan 20 00:51:03.169823 sshd[6513]: Accepted publickey for core from 10.0.0.1 port 59642 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:03.174329 sshd[6513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:03.188125 systemd-logind[1586]: New session 28 of user core. Jan 20 00:51:03.197362 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 00:51:04.036767 sshd[6513]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:04.056728 systemd[1]: Started sshd@28-10.0.0.92:22-10.0.0.1:59650.service - OpenSSH per-connection server daemon (10.0.0.1:59650). Jan 20 00:51:04.060812 systemd[1]: sshd@27-10.0.0.92:22-10.0.0.1:59642.service: Deactivated successfully. Jan 20 00:51:04.073567 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 00:51:04.079623 systemd-logind[1586]: Session 28 logged out. Waiting for processes to exit. Jan 20 00:51:04.082424 systemd-logind[1586]: Removed session 28. Jan 20 00:51:04.190102 sshd[6526]: Accepted publickey for core from 10.0.0.1 port 59650 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:04.193433 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:04.208026 systemd-logind[1586]: New session 29 of user core. Jan 20 00:51:04.223830 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 00:51:05.637022 sshd[6526]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:05.655200 systemd[1]: Started sshd@29-10.0.0.92:22-10.0.0.1:59658.service - OpenSSH per-connection server daemon (10.0.0.1:59658). Jan 20 00:51:05.672717 systemd[1]: sshd@28-10.0.0.92:22-10.0.0.1:59650.service: Deactivated successfully. Jan 20 00:51:05.689223 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 00:51:05.692088 systemd-logind[1586]: Session 29 logged out. Waiting for processes to exit. Jan 20 00:51:05.697879 systemd-logind[1586]: Removed session 29. Jan 20 00:51:06.007156 sshd[6554]: Accepted publickey for core from 10.0.0.1 port 59658 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:06.023884 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:06.046047 systemd-logind[1586]: New session 30 of user core. Jan 20 00:51:06.061659 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 00:51:06.833801 kubelet[2786]: E0120 00:51:06.833534 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:51:08.043207 sshd[6554]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:08.090831 systemd[1]: Started sshd@30-10.0.0.92:22-10.0.0.1:59672.service - OpenSSH per-connection server daemon (10.0.0.1:59672). Jan 20 00:51:08.092061 systemd[1]: sshd@29-10.0.0.92:22-10.0.0.1:59658.service: Deactivated successfully. Jan 20 00:51:08.134434 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 00:51:08.136407 systemd-logind[1586]: Session 30 logged out. Waiting for processes to exit. Jan 20 00:51:08.150782 systemd-logind[1586]: Removed session 30. Jan 20 00:51:08.235195 sshd[6569]: Accepted publickey for core from 10.0.0.1 port 59672 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:08.243922 sshd[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:08.264411 systemd-logind[1586]: New session 31 of user core. Jan 20 00:51:08.274601 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 00:51:09.164074 sshd[6569]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:09.186847 systemd[1]: sshd@30-10.0.0.92:22-10.0.0.1:59672.service: Deactivated successfully. Jan 20 00:51:09.199857 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 00:51:09.204313 systemd-logind[1586]: Session 31 logged out. Waiting for processes to exit. Jan 20 00:51:09.209932 systemd-logind[1586]: Removed session 31. 
Jan 20 00:51:10.883813 kubelet[2786]: E0120 00:51:10.883649 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:51:10.887172 kubelet[2786]: E0120 00:51:10.886398 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:51:10.887172 kubelet[2786]: E0120 00:51:10.886808 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:51:12.790027 kubelet[2786]: E0120 00:51:12.788386 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:51:12.821047 kubelet[2786]: E0120 00:51:12.814757 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:51:13.905158 kubelet[2786]: E0120 00:51:13.904918 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:14.190098 systemd[1]: Started sshd@31-10.0.0.92:22-10.0.0.1:58618.service - OpenSSH per-connection server daemon (10.0.0.1:58618). Jan 20 00:51:14.276165 sshd[6587]: Accepted publickey for core from 10.0.0.1 port 58618 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:14.283046 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:14.307332 systemd-logind[1586]: New session 32 of user core. Jan 20 00:51:14.322744 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 00:51:14.619554 sshd[6587]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:14.638183 systemd[1]: sshd@31-10.0.0.92:22-10.0.0.1:58618.service: Deactivated successfully. Jan 20 00:51:14.647724 systemd-logind[1586]: Session 32 logged out. Waiting for processes to exit. Jan 20 00:51:14.649725 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 00:51:14.651684 systemd-logind[1586]: Removed session 32. Jan 20 00:51:14.780599 kubelet[2786]: E0120 00:51:14.780185 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:51:19.672270 systemd[1]: Started sshd@32-10.0.0.92:22-10.0.0.1:58632.service - OpenSSH per-connection server daemon (10.0.0.1:58632). Jan 20 00:51:19.773419 sshd[6602]: Accepted publickey for core from 10.0.0.1 port 58632 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:19.781179 sshd[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:19.787881 kubelet[2786]: E0120 00:51:19.786116 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:51:19.815836 systemd-logind[1586]: New session 33 of user core. 
Jan 20 00:51:19.829701 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 00:51:20.270135 sshd[6602]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:20.284609 systemd-logind[1586]: Session 33 logged out. Waiting for processes to exit. Jan 20 00:51:20.290473 systemd[1]: sshd@32-10.0.0.92:22-10.0.0.1:58632.service: Deactivated successfully. Jan 20 00:51:20.303616 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 00:51:20.307108 systemd-logind[1586]: Removed session 33. Jan 20 00:51:34.344352 systemd[1]: Started sshd@33-10.0.0.92:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122). Jan 20 00:51:34.721436 systemd-journald[1172]: Under memory pressure, flushing caches. Jan 20 00:51:34.697331 systemd-resolved[1472]: Under memory pressure, flushing caches. Jan 20 00:51:34.697567 systemd-resolved[1472]: Flushed all caches. Jan 20 00:51:34.780078 sshd[6621]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:34.787571 sshd[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:34.828277 systemd-logind[1586]: New session 34 of user core. Jan 20 00:51:34.855183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611-rootfs.mount: Deactivated successfully. Jan 20 00:51:34.873057 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 00:51:34.935051 kubelet[2786]: E0120 00:51:34.934852 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:34.944495 kubelet[2786]: E0120 00:51:34.944434 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:51:34.947322 kubelet[2786]: E0120 00:51:34.944842 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:51:34.947745 kubelet[2786]: E0120 00:51:34.944925 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:51:34.948303 kubelet[2786]: E0120 00:51:34.945470 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:51:34.949709 kubelet[2786]: E0120 00:51:34.949460 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:34.951642 kubelet[2786]: E0120 00:51:34.951596 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:51:34.955716 kubelet[2786]: E0120 00:51:34.955501 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:51:34.955716 kubelet[2786]: E0120 00:51:34.955646 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:51:34.998301 containerd[1602]: time="2026-01-20T00:51:34.954427024Z" level=info msg="shim disconnected" id=5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611 namespace=k8s.io Jan 20 00:51:34.998301 containerd[1602]: time="2026-01-20T00:51:34.993643887Z" level=warning msg="cleaning up after shim disconnected" id=5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611 namespace=k8s.io Jan 20 00:51:34.998301 containerd[1602]: time="2026-01-20T00:51:34.993675534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:51:35.012442 containerd[1602]: time="2026-01-20T00:51:35.008486605Z" level=error msg="collecting metrics for 5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611" error="ttrpc: closed: unknown" Jan 20 00:51:35.303456 sshd[6621]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:35.320767 systemd[1]: sshd@33-10.0.0.92:22-10.0.0.1:57122.service: Deactivated successfully. Jan 20 00:51:35.339578 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 00:51:35.339619 systemd-logind[1586]: Session 34 logged out. Waiting for processes to exit. Jan 20 00:51:35.350311 systemd-logind[1586]: Removed session 34. Jan 20 00:51:35.778841 kubelet[2786]: E0120 00:51:35.778755 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:35.961198 kubelet[2786]: I0120 00:51:35.961024 2786 scope.go:117] "RemoveContainer" containerID="5ed46f0e05880a97fcccc6dc9a009f9ecdcadbe9bb0a66aed5ceb2d184a99611" Jan 20 00:51:35.963385 kubelet[2786]: E0120 00:51:35.961302 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:35.974931 containerd[1602]: time="2026-01-20T00:51:35.969815459Z" level=info msg="CreateContainer within sandbox \"ef3b922f2e05096b18aa870962f1a1791587f649a22deee76eb324ff4b6973a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 00:51:36.062904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578608025.mount: Deactivated successfully. Jan 20 00:51:36.083473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638396934.mount: Deactivated successfully. 
Jan 20 00:51:36.101448 containerd[1602]: time="2026-01-20T00:51:36.101292595Z" level=info msg="CreateContainer within sandbox \"ef3b922f2e05096b18aa870962f1a1791587f649a22deee76eb324ff4b6973a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3828b71ee32f3385cbbcd6e50f916bd23a95795530b09db2bc34d5433bf8419d\"" Jan 20 00:51:36.108011 containerd[1602]: time="2026-01-20T00:51:36.106075965Z" level=info msg="StartContainer for \"3828b71ee32f3385cbbcd6e50f916bd23a95795530b09db2bc34d5433bf8419d\"" Jan 20 00:51:36.383734 containerd[1602]: time="2026-01-20T00:51:36.380812928Z" level=info msg="StartContainer for \"3828b71ee32f3385cbbcd6e50f916bd23a95795530b09db2bc34d5433bf8419d\" returns successfully" Jan 20 00:51:36.982696 kubelet[2786]: E0120 00:51:36.981665 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:37.987673 kubelet[2786]: E0120 00:51:37.986820 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:39.000851 kubelet[2786]: E0120 00:51:39.000092 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:39.784410 kubelet[2786]: E0120 00:51:39.783477 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:40.329510 systemd[1]: Started sshd@34-10.0.0.92:22-10.0.0.1:57124.service - OpenSSH per-connection server daemon (10.0.0.1:57124). Jan 20 00:51:40.451379 sshd[6727]: Accepted publickey for core from 10.0.0.1 port 57124 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:40.455756 sshd[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:40.471871 systemd-logind[1586]: New session 35 of user core. Jan 20 00:51:40.482620 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 00:51:40.980382 sshd[6727]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:40.992902 systemd[1]: sshd@34-10.0.0.92:22-10.0.0.1:57124.service: Deactivated successfully. Jan 20 00:51:41.005241 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 00:51:41.005837 systemd-logind[1586]: Session 35 logged out. Waiting for processes to exit. Jan 20 00:51:41.016330 systemd-logind[1586]: Removed session 35. Jan 20 00:51:46.017166 systemd[1]: Started sshd@35-10.0.0.92:22-10.0.0.1:47998.service - OpenSSH per-connection server daemon (10.0.0.1:47998). Jan 20 00:51:46.117671 sshd[6742]: Accepted publickey for core from 10.0.0.1 port 47998 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:46.125037 sshd[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:46.144032 systemd-logind[1586]: New session 36 of user core. Jan 20 00:51:46.158602 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 00:51:46.521197 sshd[6742]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:46.536289 systemd[1]: sshd@35-10.0.0.92:22-10.0.0.1:47998.service: Deactivated successfully. Jan 20 00:51:46.554761 systemd-logind[1586]: Session 36 logged out. Waiting for processes to exit. 
Jan 20 00:51:46.557533 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 00:51:46.568455 systemd-logind[1586]: Removed session 36. Jan 20 00:51:47.796178 kubelet[2786]: E0120 00:51:47.795351 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7748477466-xqhsk" podUID="7442950d-347c-4ccb-839f-bbcef74b512f" Jan 20 00:51:47.872094 kubelet[2786]: E0120 00:51:47.871204 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ffgch" podUID="946b9e08-0972-42be-947f-c9b1fe484382" Jan 20 00:51:47.980715 kubelet[2786]: E0120 00:51:47.979694 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:48.136341 kubelet[2786]: E0120 00:51:48.133850 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:48.796098 kubelet[2786]: E0120 00:51:48.795518 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-f4nqr" podUID="7e16703d-6774-4dbd-a448-684d9c6307e4" Jan 20 00:51:48.796098 kubelet[2786]: E0120 00:51:48.795530 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-55cdf5b57-92x4l" podUID="c6f27543-10cf-4ae1-9e7a-a66dba01cb01" Jan 20 00:51:49.781735 kubelet[2786]: E0120 00:51:49.779393 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:49.783615 kubelet[2786]: E0120 00:51:49.783529 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddd4777cd-jcj86" podUID="303ab104-f18e-4de9-832d-feef41e44244" Jan 20 00:51:49.791167 containerd[1602]: time="2026-01-20T00:51:49.787526250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:51:49.793079 kubelet[2786]: E0120 00:51:49.790793 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmzpv" podUID="9f91c8a7-f2ad-4d3b-acad-ec065bbf8a4a" Jan 20 00:51:49.914027 containerd[1602]: time="2026-01-20T00:51:49.910391265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:51:49.926029 containerd[1602]: time="2026-01-20T00:51:49.918854570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:51:49.926029 containerd[1602]: time="2026-01-20T00:51:49.918944089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:51:49.926263 kubelet[2786]: E0120 00:51:49.919258 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:51:49.926263 kubelet[2786]: E0120 00:51:49.919315 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:51:49.926263 kubelet[2786]: E0120 00:51:49.919481 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bd15b3b8928842729e5a367f173cdad6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:51:49.932339 containerd[1602]: time="2026-01-20T00:51:49.927035807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:51:50.042350 containerd[1602]: time="2026-01-20T00:51:50.041884338Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:51:50.050932 containerd[1602]: time="2026-01-20T00:51:50.050713249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:51:50.050932 containerd[1602]: time="2026-01-20T00:51:50.050831318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:51:50.051272 kubelet[2786]: E0120 00:51:50.051180 2786 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:51:50.051356 kubelet[2786]: E0120 00:51:50.051267 2786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:51:50.051577 kubelet[2786]: E0120 00:51:50.051431 2786 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b7b664c8f-84jkd_calico-system(06b8bae0-3466-476f-9e43-40816e9ed87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:51:50.053984 kubelet[2786]: E0120 00:51:50.053890 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b7b664c8f-84jkd" podUID="06b8bae0-3466-476f-9e43-40816e9ed87d" Jan 20 00:51:51.562075 systemd[1]: Started sshd@36-10.0.0.92:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014). 
Jan 20 00:51:51.745741 sshd[6765]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:51:51.750363 sshd[6765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:51:51.770052 systemd-logind[1586]: New session 37 of user core. Jan 20 00:51:51.783206 kubelet[2786]: E0120 00:51:51.782175 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:51.786121 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 00:51:52.207907 sshd[6765]: pam_unix(sshd:session): session closed for user core Jan 20 00:51:52.226303 systemd[1]: sshd@36-10.0.0.92:22-10.0.0.1:48014.service: Deactivated successfully. Jan 20 00:51:52.241378 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 00:51:52.248591 systemd-logind[1586]: Session 37 logged out. Waiting for processes to exit. Jan 20 00:51:52.252300 systemd-logind[1586]: Removed session 37. Jan 20 00:51:52.781842 kubelet[2786]: E0120 00:51:52.778165 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"