Jan 17 00:22:03.792069 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:22:03.792098 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:22:03.792112 kernel: BIOS-provided physical RAM map: Jan 17 00:22:03.792121 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:22:03.792129 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 00:22:03.792137 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 00:22:03.792147 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 00:22:03.792155 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 00:22:03.792163 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 00:22:03.792171 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 00:22:03.792183 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 00:22:03.792191 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 00:22:03.792199 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 00:22:03.792208 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 00:22:03.792218 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 00:22:03.792227 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 00:22:03.792239 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 00:22:03.792248 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 00:22:03.792256 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 00:22:03.792788 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 00:22:03.792805 kernel: NX (Execute Disable) protection: active Jan 17 00:22:03.792814 kernel: APIC: Static calls initialized Jan 17 00:22:03.792823 kernel: efi: EFI v2.7 by EDK II Jan 17 00:22:03.792832 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 17 00:22:03.792842 kernel: SMBIOS 2.8 present. 
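The BIOS-e820 table above is the firmware's map of physical RAM; the memory the kernel can actually use is the sum of the ranges marked "usable". A minimal sketch (not part of the log; the helper name is made up) that tallies those ranges from a dmesg capture:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w[\w ]*)")

    def usable_bytes(dmesg: str) -> int:
        """Sum the inclusive [start-end] ranges that the firmware marked 'usable'."""
        total = 0
        for m in E820_RE.finditer(dmesg):
            start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3).strip()
            if kind == "usable":
                total += end - start + 1
        return total

    # One range taken verbatim from the log above: roughly 2.4 GiB on its own.
    print(usable_bytes("BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable"))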
Jan 17 00:22:03.792850 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 00:22:03.792859 kernel: Hypervisor detected: KVM Jan 17 00:22:03.792873 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:22:03.792883 kernel: kvm-clock: using sched offset of 9817290620 cycles Jan 17 00:22:03.793111 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:22:03.793122 kernel: tsc: Detected 2445.424 MHz processor Jan 17 00:22:03.793131 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:22:03.793141 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:22:03.793150 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 00:22:03.793159 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:22:03.793168 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:22:03.793182 kernel: Using GB pages for direct mapping Jan 17 00:22:03.793191 kernel: Secure boot disabled Jan 17 00:22:03.793200 kernel: ACPI: Early table checksum verification disabled Jan 17 00:22:03.793209 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 00:22:03.793223 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 00:22:03.793233 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:22:03.793242 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:22:03.793256 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 00:22:03.793389 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:22:03.793402 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:22:03.793411 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:22:03.793421 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:22:03.793430 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 00:22:03.793440 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 00:22:03.793454 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 17 00:22:03.793463 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 00:22:03.793473 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 00:22:03.793482 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 00:22:03.793492 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 00:22:03.793501 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 00:22:03.793511 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 00:22:03.793520 kernel: No NUMA configuration found Jan 17 00:22:03.793530 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 00:22:03.793543 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 00:22:03.793552 kernel: Zone ranges: Jan 17 00:22:03.793562 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:22:03.793571 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 00:22:03.793581 kernel: Normal empty Jan 17 00:22:03.793590 kernel: Movable zone start for each node Jan 17 00:22:03.793600 kernel: Early memory node ranges Jan 17 00:22:03.793609 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:22:03.793619 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 00:22:03.793628 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 00:22:03.793641 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 00:22:03.793650 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 00:22:03.793660 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 00:22:03.793669 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 00:22:03.793679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:22:03.793742 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:22:03.793752 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 00:22:03.793762 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:22:03.793771 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 00:22:03.793785 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 00:22:03.793795 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 00:22:03.793804 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:22:03.793814 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:22:03.793824 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:22:03.793833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:22:03.793843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:22:03.793853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:22:03.793862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:22:03.793875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:22:03.793922 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:22:03.793934 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:22:03.793943 kernel: TSC deadline timer available Jan 17 00:22:03.793953 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 00:22:03.793962 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:22:03.793972 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 00:22:03.793981 kernel: kvm-guest: setup PV sched yield Jan 17 00:22:03.793991 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:22:03.794004 kernel: Booting paravirtualized kernel on KVM Jan 17 00:22:03.794014 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:22:03.794024 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 00:22:03.794033 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 17 00:22:03.794043 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 17 00:22:03.794052 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 00:22:03.794062 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:22:03.794072 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:22:03.794082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 
00:22:03.794095 kernel: random: crng init done Jan 17 00:22:03.794105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:22:03.794115 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:22:03.794124 kernel: Fallback order for Node 0: 0 Jan 17 00:22:03.794134 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 00:22:03.794144 kernel: Policy zone: DMA32 Jan 17 00:22:03.794153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:22:03.794164 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved) Jan 17 00:22:03.794176 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 00:22:03.794186 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:22:03.794195 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:22:03.794205 kernel: Dynamic Preempt: voluntary Jan 17 00:22:03.794215 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:22:03.794235 kernel: rcu: RCU event tracing is enabled. Jan 17 00:22:03.794248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 00:22:03.794259 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:22:03.794461 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:22:03.794474 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:22:03.794484 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:22:03.794494 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 00:22:03.794509 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 00:22:03.794519 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:22:03.794529 kernel: Console: colour dummy device 80x25 Jan 17 00:22:03.794540 kernel: printk: console [ttyS0] enabled Jan 17 00:22:03.794549 kernel: ACPI: Core revision 20230628 Jan 17 00:22:03.794563 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:22:03.794573 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:22:03.794583 kernel: x2apic enabled Jan 17 00:22:03.794593 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:22:03.794604 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 00:22:03.794620 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 00:22:03.794631 kernel: kvm-guest: setup PV IPIs Jan 17 00:22:03.794641 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:22:03.794651 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 00:22:03.794665 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Jan 17 00:22:03.794675 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 00:22:03.794738 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 00:22:03.794750 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 00:22:03.794760 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:22:03.794770 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:22:03.794781 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:22:03.794791 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:22:03.794801 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 00:22:03.794823 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 00:22:03.794834 kernel: active return thunk: srso_alias_return_thunk Jan 17 00:22:03.794844 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 00:22:03.794854 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 17 00:22:03.794864 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:22:03.794874 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:22:03.794921 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:22:03.794936 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:22:03.794954 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:22:03.794967 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 00:22:03.794979 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:22:03.794990 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:22:03.795001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:22:03.795012 kernel: landlock: Up and running. Jan 17 00:22:03.795023 kernel: SELinux: Initializing. Jan 17 00:22:03.795033 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:22:03.795045 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:22:03.795060 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 17 00:22:03.795072 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:22:03.795084 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:22:03.795095 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:22:03.795105 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 17 00:22:03.795116 kernel: signal: max sigframe size: 1776 Jan 17 00:22:03.795127 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:22:03.795139 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:22:03.795150 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:22:03.795164 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:22:03.795175 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:22:03.795185 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 17 00:22:03.795196 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 00:22:03.795207 kernel: smpboot: Max logical packages: 1 Jan 17 00:22:03.795219 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 17 00:22:03.795230 kernel: devtmpfs: initialized Jan 17 00:22:03.795242 kernel: x86/mm: Memory block size: 128MB Jan 17 00:22:03.795254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 00:22:03.795358 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 00:22:03.795372 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 00:22:03.795383 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 00:22:03.795394 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 00:22:03.795405 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:22:03.795416 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 00:22:03.795427 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:22:03.795439 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:22:03.795450 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:22:03.795465 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:22:03.795476 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:22:03.795487 kernel: audit: type=2000 audit(1768609319.747:1): state=initialized audit_enabled=0 res=1 Jan 17 00:22:03.795497 kernel: cpuidle: using governor menu Jan 17 00:22:03.795508 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:22:03.795519 kernel: dca service started, version 1.12.1 Jan 17 00:22:03.795531 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 00:22:03.795542 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 00:22:03.795553 kernel: PCI: Using configuration type 1 for base access Jan 17 00:22:03.795568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
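The MMCONFIG line above reserves 0xb0000000-0xbfffffff for PCIe extended configuration space covering buses 00-ff. That size is not arbitrary: ECAM gives every function a 4 KiB config page, with 8 functions per device and 32 devices per bus, so a full 256-bus segment needs exactly 256 MiB. A quick arithmetic check:

    CFG_PAGE = 4 * 1024            # 4 KiB of config space per PCIe function
    FUNCS_PER_DEV = 8
    DEVS_PER_BUS = 32
    BUSES = 256                    # bus 00-ff, as in the MMCONFIG line

    per_bus = CFG_PAGE * FUNCS_PER_DEV * DEVS_PER_BUS     # 1 MiB per bus
    window = per_bus * BUSES                              # 256 MiB for the whole segment

    assert window == 0xbfffffff - 0xb0000000 + 1          # matches [mem 0xb0000000-0xbfffffff]
    print(per_bus, window)                                # 1048576 268435456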
Jan 17 00:22:03.795579 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:22:03.795590 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:22:03.795601 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:22:03.795613 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:22:03.795624 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:22:03.795635 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:22:03.795646 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:22:03.795657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:22:03.795672 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:22:03.796166 kernel: ACPI: Interpreter enabled Jan 17 00:22:03.796181 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:22:03.796193 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:22:03.796205 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:22:03.796216 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 00:22:03.796227 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 00:22:03.796238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:22:03.796787 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:22:03.798543 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 00:22:03.798799 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 00:22:03.798818 kernel: PCI host bridge to bus 0000:00 Jan 17 00:22:03.799167 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:22:03.799371 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:22:03.799533 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:22:03.800255 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 00:22:03.800427 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 00:22:03.800588 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 00:22:03.800852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:22:03.801107 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 00:22:03.801317 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 00:22:03.801526 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 00:22:03.801795 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 00:22:03.803207 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 00:22:03.803447 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 00:22:03.803643 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:22:03.803942 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 11718 usecs Jan 17 00:22:03.804161 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:22:03.804370 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 00:22:03.804570 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 00:22:03.804832 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 00:22:03.805083 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:22:03.805285 kernel: pci 0000:00:03.0: reg 0x10: [io 
0x6000-0x607f] Jan 17 00:22:03.805484 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 17 00:22:03.805669 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 00:22:03.806606 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:22:03.807213 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 00:22:03.809757 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 00:22:03.809986 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 00:22:03.810176 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 00:22:03.810672 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 00:22:03.810948 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 00:22:03.811135 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 00:22:03.811412 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 00:22:03.811582 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 00:22:03.811840 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 00:22:03.812055 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 00:22:03.812072 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:22:03.812084 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:22:03.812101 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:22:03.812113 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:22:03.812125 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 00:22:03.812136 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 00:22:03.812147 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 00:22:03.812159 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 00:22:03.812170 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 00:22:03.812181 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 00:22:03.812192 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 00:22:03.812207 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 00:22:03.812218 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 00:22:03.812230 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 00:22:03.812241 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 00:22:03.812252 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 00:22:03.812264 kernel: iommu: Default domain type: Translated Jan 17 00:22:03.812541 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:22:03.812553 kernel: efivars: Registered efivars operations Jan 17 00:22:03.812564 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:22:03.812580 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:22:03.812592 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 00:22:03.812603 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 00:22:03.812614 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 00:22:03.812624 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 00:22:03.812942 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 00:22:03.813114 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 00:22:03.813278 kernel: pci 0000:00:01.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Jan 17 00:22:03.813337 kernel: vgaarb: loaded Jan 17 00:22:03.813358 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 00:22:03.813371 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:22:03.813382 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:22:03.813392 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:22:03.813403 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:22:03.813413 kernel: pnp: PnP ACPI init Jan 17 00:22:03.813632 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 00:22:03.813649 kernel: pnp: PnP ACPI: found 6 devices Jan 17 00:22:03.813665 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:22:03.813766 kernel: NET: Registered PF_INET protocol family Jan 17 00:22:03.813781 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:22:03.813792 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:22:03.813804 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:22:03.813815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:22:03.813826 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:22:03.813837 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:22:03.813848 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:22:03.813863 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:22:03.813874 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:22:03.813922 kernel: NET: Registered PF_XDP protocol family Jan 17 00:22:03.814127 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 00:22:03.814330 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 00:22:03.814506 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:22:03.814682 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:22:03.814964 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:22:03.815131 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 00:22:03.815289 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 00:22:03.815443 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 00:22:03.815459 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:22:03.815471 kernel: Initialise system trusted keyrings Jan 17 00:22:03.815482 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:22:03.815493 kernel: Key type asymmetric registered Jan 17 00:22:03.815505 kernel: Asymmetric key parser 'x509' registered Jan 17 00:22:03.815520 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:22:03.815531 kernel: io scheduler mq-deadline registered Jan 17 00:22:03.815542 kernel: io scheduler kyber registered Jan 17 00:22:03.815554 kernel: io scheduler bfq registered Jan 17 00:22:03.815564 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:22:03.815577 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:22:03.815589 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:22:03.815601 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:22:03.815612 kernel: Serial: 
8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:22:03.815623 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:22:03.815638 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:22:03.815650 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:22:03.815661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:22:03.816020 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 00:22:03.816039 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 17 00:22:03.816212 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 00:22:03.816364 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:22:02 UTC (1768609322) Jan 17 00:22:03.816520 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:22:03.816535 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:22:03.816546 kernel: efifb: probing for efifb Jan 17 00:22:03.816557 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 00:22:03.816568 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 00:22:03.816579 kernel: efifb: scrolling: redraw Jan 17 00:22:03.816589 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 00:22:03.816600 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 00:22:03.816611 kernel: fb0: EFI VGA frame buffer device Jan 17 00:22:03.816626 kernel: pstore: Using crash dump compression: deflate Jan 17 00:22:03.816637 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:22:03.816648 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:22:03.816659 kernel: Segment Routing with IPv6 Jan 17 00:22:03.816669 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:22:03.816680 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:22:03.816753 kernel: Key type dns_resolver registered Jan 17 00:22:03.816786 kernel: IPI shorthand broadcast: enabled Jan 17 00:22:03.816800 kernel: sched_clock: Marking stable (2783020208, 706758792)->(4200206842, -710427842) Jan 17 00:22:03.816815 kernel: registered taskstats version 1 Jan 17 00:22:03.816827 kernel: Loading compiled-in X.509 certificates Jan 17 00:22:03.816838 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:22:03.816850 kernel: Key type .fscrypt registered Jan 17 00:22:03.816861 kernel: Key type fscrypt-provisioning registered Jan 17 00:22:03.816872 kernel: ima: No TPM chip found, activating TPM-bypass! 
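The rtc_cmos line above reads the hardware clock and reports both the human-readable time and the epoch value it sets (1768609322). The two describe the same instant, which is easy to confirm:

    from datetime import datetime, timezone

    epoch = 1768609322                                   # value printed by rtc_cmos above
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # 2026-01-17T00:22:02+00:00, matching "setting system clock to 2026-01-17T00:22:02 UTC"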
Jan 17 00:22:03.816884 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:22:03.816932 kernel: ima: No architecture policies found Jan 17 00:22:03.816944 kernel: clk: Disabling unused clocks Jan 17 00:22:03.816958 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:22:03.816970 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:22:03.816981 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:22:03.816993 kernel: Run /init as init process Jan 17 00:22:03.817005 kernel: with arguments: Jan 17 00:22:03.817016 kernel: /init Jan 17 00:22:03.817027 kernel: with environment: Jan 17 00:22:03.817038 kernel: HOME=/ Jan 17 00:22:03.817049 kernel: TERM=linux Jan 17 00:22:03.817066 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:22:03.817080 systemd[1]: Detected virtualization kvm. Jan 17 00:22:03.817092 systemd[1]: Detected architecture x86-64. Jan 17 00:22:03.817104 systemd[1]: Running in initrd. Jan 17 00:22:03.817115 systemd[1]: No hostname configured, using default hostname. Jan 17 00:22:03.817127 systemd[1]: Hostname set to . Jan 17 00:22:03.817139 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:22:03.817154 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:22:03.817167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:22:03.817178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:22:03.817191 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:22:03.817204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:22:03.817222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:22:03.817234 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:22:03.817248 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:22:03.817260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:22:03.817272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:22:03.817284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:22:03.817296 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:22:03.817311 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:22:03.817323 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:22:03.817335 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:22:03.817347 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:22:03.817359 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:22:03.817371 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:22:03.817383 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
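The "Expecting device dev-disk-by\x2dlabel-..." units above are ordinary block-device paths run through systemd's unit-name escaping: the leading slash is dropped, remaining slashes become "-", and characters such as "-" inside a path component are hex-escaped. A rough sketch of that mapping (an approximation of systemd-escape --path, not the real implementation):

    def escape_component(comp: str) -> str:
        # Keep letters, digits, '_' and non-leading '.'; hex-escape everything else.
        out = []
        for i, ch in enumerate(comp):
            if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("".join(r"\x%02x" % b for b in ch.encode()))
        return "".join(out)

    def path_to_device_unit(path: str) -> str:
        parts = [p for p in path.strip("/").split("/") if p]
        return "-".join(escape_component(p) for p in parts) + ".device"

    print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device  (as in the log above)
    print(path_to_device_unit("/dev/mapper/usr"))
    # dev-mapper-usr.device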
Jan 17 00:22:03.817395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:22:03.817410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:22:03.817422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:22:03.817434 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:22:03.817446 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:22:03.817458 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:22:03.817470 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:22:03.817482 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:22:03.817494 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:22:03.817506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:22:03.817548 systemd-journald[194]: Collecting audit messages is disabled. Jan 17 00:22:03.817576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:22:03.817588 systemd-journald[194]: Journal started Jan 17 00:22:03.817616 systemd-journald[194]: Runtime Journal (/run/log/journal/4750b504659b41f2bbee8618129d35e1) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:22:03.839155 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:22:03.843516 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:22:03.855321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:22:03.877617 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:22:03.880584 systemd-modules-load[195]: Inserted module 'overlay' Jan 17 00:22:03.891504 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:22:03.926087 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:22:03.954406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:22:03.980029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:22:04.011056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:22:04.015024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:22:04.064259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:22:04.075810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:22:04.117163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:22:04.191077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:22:04.196239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:22:04.211487 kernel: Bridge firewalling registered Jan 17 00:22:04.212197 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 17 00:22:04.234498 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
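systemd-modules-load reports "Inserted module 'overlay'" and "Inserted module 'br_netfilter'" above; once loaded, the same modules are listed in /proc/modules. A small check (the file name and its first-column format are the standard kernel interface; the helper itself is illustrative):

    def loaded_modules(proc_modules: str = "/proc/modules") -> set:
        # Each line starts with the module name, followed by size, refcount, users, ...
        with open(proc_modules) as f:
            return {line.split()[0] for line in f if line.strip()}

    mods = loaded_modules()
    for wanted in ("overlay", "br_netfilter"):
        print(wanted, "loaded" if wanted in mods else "missing")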
Jan 17 00:22:04.266408 dracut-cmdline[224]: dracut-dracut-053 Jan 17 00:22:04.266408 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:22:04.253290 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:22:04.345641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:22:04.385401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:22:04.461812 kernel: SCSI subsystem initialized Jan 17 00:22:04.462602 systemd-resolved[292]: Positive Trust Anchors: Jan 17 00:22:04.462644 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:22:04.462734 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:22:04.476209 systemd-resolved[292]: Defaulting to hostname 'linux'. Jan 17 00:22:04.480834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:22:04.551173 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:22:04.530537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:22:04.572481 kernel: iscsi: registered transport (tcp) Jan 17 00:22:04.622464 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:22:04.623025 kernel: QLogic iSCSI HBA Driver Jan 17 00:22:04.781383 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:22:04.815122 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:22:04.925754 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:22:04.925827 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:22:04.943309 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:22:05.089002 kernel: raid6: avx2x4 gen() 15087 MB/s Jan 17 00:22:05.108977 kernel: raid6: avx2x2 gen() 21884 MB/s Jan 17 00:22:05.132379 kernel: raid6: avx2x1 gen() 8317 MB/s Jan 17 00:22:05.132442 kernel: raid6: using algorithm avx2x2 gen() 21884 MB/s Jan 17 00:22:05.161358 kernel: raid6: .... xor() 14885 MB/s, rmw enabled Jan 17 00:22:05.161447 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:22:05.200997 kernel: xor: automatically using best checksumming function avx Jan 17 00:22:05.723468 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:22:05.770849 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:22:05.817088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:22:05.855634 systemd-udevd[416]: Using default interface naming scheme 'v255'. 
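The raid6 lines above are the kernel benchmarking its gen() implementations and keeping the fastest one (avx2x2 at 21884 MB/s here). A sketch that reproduces the selection from the log text itself:

    import re

    GEN_RE = re.compile(r"raid6: (\S+) gen\(\) (\d+) MB/s")

    log = """
    raid6: avx2x4 gen() 15087 MB/s
    raid6: avx2x2 gen() 21884 MB/s
    raid6: avx2x1 gen() 8317 MB/s
    """

    results = {m.group(1): int(m.group(2)) for m in GEN_RE.finditer(log)}
    best = max(results, key=results.get)
    print(best, results[best])   # avx2x2 21884, matching "using algorithm avx2x2 gen() 21884 MB/s"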
Jan 17 00:22:05.865169 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:22:05.939137 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:22:06.043842 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation Jan 17 00:22:06.204949 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:22:06.228030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:22:06.400834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:22:06.432315 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:22:06.477389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:22:06.504368 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:22:06.527984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:22:06.539491 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:22:06.566611 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:22:06.581787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:22:06.582114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:22:06.587391 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:22:06.592035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:22:06.592944 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:22:06.615425 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:22:06.660001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:22:06.681540 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:22:06.698819 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 00:22:06.699153 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:22:06.714755 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 00:22:06.731219 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:22:06.731286 kernel: GPT:9289727 != 19775487 Jan 17 00:22:06.731302 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:22:06.731316 kernel: GPT:9289727 != 19775487 Jan 17 00:22:06.731329 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:22:06.731343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:22:06.776440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:22:06.810765 kernel: libata version 3.00 loaded. Jan 17 00:22:06.818249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:22:06.856370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:22:06.913748 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:22:06.917752 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:22:06.917806 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (466) Jan 17 00:22:06.917837 kernel: AVX2 version of gcm_enc/dec engaged. 
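The virtio_blk and GPT messages above fit together: the disk advertises 19775488 512-byte sectors, but the GPT on the image still records its alternate (backup) header at LBA 9289727, i.e. the image was built for a smaller disk and later grown, so the backup header is no longer at the last LBA (19775487). The sizes the kernel prints check out:

    SECTOR = 512
    sectors = 19775488                     # from the virtio_blk line
    backup_lba = 9289727                   # where the on-disk GPT thinks its alternate header lives
    last_lba = sectors - 1

    size = sectors * SECTOR
    print(round(size / 10**9, 1), "GB")    # 10.1 GB  (decimal, as printed by the kernel)
    print(round(size / 2**30, 2), "GiB")   # 9.43 GiB (binary, also printed)
    print(backup_lba != last_lba)          # True -> "Alternate GPT header not at the end of the disk"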
Jan 17 00:22:06.945072 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:22:06.945476 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:22:06.946298 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Jan 17 00:22:06.959140 kernel: AES CTR mode by8 optimization enabled Jan 17 00:22:06.964681 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:22:06.999264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 00:22:07.027148 kernel: scsi host0: ahci Jan 17 00:22:07.027537 kernel: scsi host1: ahci Jan 17 00:22:07.027796 kernel: scsi host2: ahci Jan 17 00:22:07.028042 kernel: scsi host3: ahci Jan 17 00:22:07.028274 kernel: scsi host4: ahci Jan 17 00:22:07.028630 kernel: scsi host5: ahci Jan 17 00:22:07.019013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:22:07.065111 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 00:22:07.065140 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 00:22:07.065156 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 00:22:07.065171 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 00:22:07.065186 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 00:22:07.065201 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 00:22:07.063859 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:22:07.093024 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:22:07.128177 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:22:07.154272 disk-uuid[563]: Primary Header is updated. Jan 17 00:22:07.154272 disk-uuid[563]: Secondary Entries is updated. Jan 17 00:22:07.154272 disk-uuid[563]: Secondary Header is updated. Jan 17 00:22:07.184849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:22:07.201113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:22:07.365161 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:22:07.392427 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:22:07.400840 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:22:07.407179 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:22:07.407242 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:22:07.410997 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:22:07.422011 kernel: ata3.00: applying bridge limits Jan 17 00:22:07.423428 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:22:07.428682 kernel: ata3.00: configured for UDMA/100 Jan 17 00:22:07.436777 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:22:07.562967 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:22:07.563287 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:22:07.594544 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:22:08.243942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:22:08.247892 disk-uuid[568]: The operation has completed successfully. 
Jan 17 00:22:08.349461 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:22:08.357814 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:22:08.418350 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:22:08.441859 sh[600]: Success Jan 17 00:22:08.501510 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:22:08.600983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:22:08.638040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:22:08.649649 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:22:08.695411 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:22:08.695479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:22:08.695498 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:22:08.699843 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:22:08.703132 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:22:08.739631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:22:08.748330 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:22:08.777868 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:22:08.792006 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:22:08.835763 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:22:08.835855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:22:08.835877 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:22:08.852819 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:22:08.879831 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:22:08.887478 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:22:08.920275 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:22:08.941313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:22:09.138500 ignition[704]: Ignition 2.19.0 Jan 17 00:22:09.138533 ignition[704]: Stage: fetch-offline Jan 17 00:22:09.138579 ignition[704]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:22:09.138592 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:22:09.138791 ignition[704]: parsed url from cmdline: "" Jan 17 00:22:09.138797 ignition[704]: no config URL provided Jan 17 00:22:09.138805 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:22:09.138819 ignition[704]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:22:09.138864 ignition[704]: op(1): [started] loading QEMU firmware config module Jan 17 00:22:09.138873 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 00:22:09.180736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:22:09.182593 ignition[704]: op(1): [finished] loading QEMU firmware config module Jan 17 00:22:09.207293 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 00:22:09.288622 systemd-networkd[789]: lo: Link UP Jan 17 00:22:09.289992 systemd-networkd[789]: lo: Gained carrier Jan 17 00:22:09.294238 systemd-networkd[789]: Enumeration completed Jan 17 00:22:09.295430 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:22:09.295435 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:22:09.305485 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:22:09.313250 systemd-networkd[789]: eth0: Link UP Jan 17 00:22:09.313257 systemd-networkd[789]: eth0: Gained carrier Jan 17 00:22:09.313274 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:22:09.325006 systemd[1]: Reached target network.target - Network. Jan 17 00:22:09.387868 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:22:09.560363 ignition[704]: parsing config with SHA512: 6b2510e1c6b4a44f4297d9f832bb26ca0eefed5cdeaf35fce563b47651cc2b4370759140a791f03f173148c144680f7961e7d8ac34e0bc3a6460f2404810dc55 Jan 17 00:22:09.577570 unknown[704]: fetched base config from "system" Jan 17 00:22:09.577587 unknown[704]: fetched user config from "qemu" Jan 17 00:22:09.578584 ignition[704]: fetch-offline: fetch-offline passed Jan 17 00:22:09.578671 ignition[704]: Ignition finished successfully Jan 17 00:22:09.595787 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:22:09.608286 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 00:22:09.639443 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:22:09.676190 ignition[793]: Ignition 2.19.0 Jan 17 00:22:09.677849 ignition[793]: Stage: kargs Jan 17 00:22:09.678109 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:22:09.678126 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:22:09.679514 ignition[793]: kargs: kargs passed Jan 17 00:22:09.679572 ignition[793]: Ignition finished successfully Jan 17 00:22:09.693149 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:22:09.711051 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:22:09.739449 ignition[803]: Ignition 2.19.0 Jan 17 00:22:09.739482 ignition[803]: Stage: disks Jan 17 00:22:09.739743 ignition[803]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:22:09.739760 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:22:09.752393 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:22:09.743252 ignition[803]: disks: disks passed Jan 17 00:22:09.759327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:22:09.743316 ignition[803]: Ignition finished successfully Jan 17 00:22:09.766238 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:22:09.771139 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:22:09.779392 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:22:09.783235 systemd[1]: Reached target basic.target - Basic System. 
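systemd-networkd's DHCP line above hands eth0 the address 10.0.0.48/16 with gateway 10.0.0.1. The /16 prefix puts the gateway inside the same 10.0.x.x network, which is easy to double-check:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.48/16")      # address from the DHCPv4 line
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                                # 10.0.0.0/16
    print(gateway in iface.network)                     # True: the gateway is directly reachable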
Jan 17 00:22:09.802283 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:22:09.853485 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:22:09.865868 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:22:09.904988 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:22:10.264210 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:22:10.266815 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:22:10.274812 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:22:10.311038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:22:10.332979 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822) Jan 17 00:22:10.333865 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:22:10.418286 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:22:10.418317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:22:10.418332 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:22:10.418346 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:22:10.349267 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:22:10.349333 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:22:10.349374 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:22:10.432658 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:22:10.455347 systemd-networkd[789]: eth0: Gained IPv6LL Jan 17 00:22:10.487755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:22:10.543080 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:22:10.657874 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:22:10.696955 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:22:10.720491 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:22:10.742079 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:22:11.102489 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:22:11.129342 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:22:11.156458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:22:11.181448 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:22:11.197494 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:22:11.278959 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
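The systemd-fsck summary above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") is the inode and block usage of the ext4 ROOT filesystem. Expressed as percentages:

    inodes_used, inodes_total = 14, 553520
    blocks_used, blocks_total = 52654, 553472

    print(f"inodes: {100 * inodes_used / inodes_total:.2f}% used")   # about 0.00% (only 14 inodes)
    print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used")   # about 9.51% used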
Jan 17 00:22:11.343060 ignition[934]: INFO : Ignition 2.19.0 Jan 17 00:22:11.343060 ignition[934]: INFO : Stage: mount Jan 17 00:22:11.355403 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:22:11.355403 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:22:11.355403 ignition[934]: INFO : mount: mount passed Jan 17 00:22:11.355403 ignition[934]: INFO : Ignition finished successfully Jan 17 00:22:11.354025 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:22:11.399755 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:22:11.440116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:22:11.474898 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (949) Jan 17 00:22:11.483783 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:22:11.492032 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:22:11.492083 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:22:11.512985 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:22:11.514747 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:22:11.577527 ignition[966]: INFO : Ignition 2.19.0 Jan 17 00:22:11.577527 ignition[966]: INFO : Stage: files Jan 17 00:22:11.589359 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:22:11.589359 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:22:11.589359 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:22:11.589359 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:22:11.589359 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:22:11.628223 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:22:11.628223 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:22:11.628223 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:22:11.628223 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:22:11.628223 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:22:11.628223 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:22:11.628223 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:22:11.598608 unknown[966]: wrote ssh authorized keys file for user: core Jan 17 00:22:11.688404 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:22:11.906596 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:22:12.031476 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:22:12.269543 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:22:13.690284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:22:13.690284 ignition[966]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 00:22:13.712355 
ignition[966]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 17 00:22:13.712355 ignition[966]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:22:13.814834 ignition[966]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:22:13.814834 ignition[966]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:22:13.814834 ignition[966]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:22:13.814834 ignition[966]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:22:13.814834 ignition[966]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:22:13.814834 ignition[966]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:22:13.814834 ignition[966]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:22:13.814834 ignition[966]: INFO : files: files passed Jan 17 00:22:13.814834 ignition[966]: INFO : Ignition finished successfully Jan 17 00:22:13.816210 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:22:13.879801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:22:13.896267 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:22:13.912198 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:22:13.912572 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:22:13.967164 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:22:13.988573 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:22:13.988573 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:22:14.007354 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:22:14.028672 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:22:14.042193 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:22:14.065565 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:22:14.142755 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:22:14.142993 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:22:14.159055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:22:14.179748 systemd[1]: Reached target initrd.target - Initrd Default Target. 
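The files stage above writes plain files, a sysext symlink, unit files and presets into /sysroot. A hypothetical Butane fragment that would drive a few of these operations is sketched below; the paths and URLs mirror the log, but the unit body is invented and this is not the config the machine actually received from QEMU.

    # Butane (variant "flatcar"); `butane` transpiles this to Ignition JSON.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    systemd:
      units:
        - name: coreos-metadata.service
          enabled: false                    # "setting preset to disabled" above
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            ConditionPathExists=/opt/helm-v3.17.0-linux-amd64.tar.gz
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar --strip-components=1 -C /opt/bin -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target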
Jan 17 00:22:14.189780 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:22:14.198580 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:22:14.266652 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:22:14.303157 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:22:14.348117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:22:14.366062 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:22:14.383469 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:22:14.391960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:22:14.396022 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:22:14.405319 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:22:14.406330 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:22:14.422796 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:22:14.423166 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:22:14.444606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:22:14.449107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:22:14.461242 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:22:14.465984 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:22:14.483192 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:22:14.492630 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:22:14.512220 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:22:14.513072 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:22:14.532647 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:22:14.545263 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:22:14.545464 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:22:14.557108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:22:14.570849 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:22:14.571141 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:22:14.587028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:22:14.587243 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:22:14.592342 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:22:14.608257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:22:14.612394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:22:14.621292 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:22:14.643953 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:22:14.657232 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:22:14.657446 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:22:14.663196 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 17 00:22:14.663379 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:22:14.668268 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:22:14.668509 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:22:14.674296 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:22:14.674539 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:22:14.697180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:22:14.732801 ignition[1020]: INFO : Ignition 2.19.0 Jan 17 00:22:14.732801 ignition[1020]: INFO : Stage: umount Jan 17 00:22:14.732801 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:22:14.732801 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:22:14.732801 ignition[1020]: INFO : umount: umount passed Jan 17 00:22:14.732801 ignition[1020]: INFO : Ignition finished successfully Jan 17 00:22:14.699484 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:22:14.699659 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:22:14.707256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:22:14.729066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:22:14.733179 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:22:14.787030 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:22:14.794649 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:22:14.823430 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:22:14.828151 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:22:14.828330 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:22:14.843734 systemd[1]: Stopped target network.target - Network. Jan 17 00:22:14.858429 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:22:14.858844 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:22:14.877887 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:22:14.878080 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:22:14.892178 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:22:14.892304 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:22:14.906155 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:22:14.913072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:22:14.933303 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:22:14.949329 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:22:14.964812 systemd-networkd[789]: eth0: DHCPv6 lease lost Jan 17 00:22:14.972427 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:22:14.976454 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:22:14.990120 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:22:14.998434 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:22:15.004894 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:22:15.005608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 00:22:15.029865 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:22:15.031371 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:22:15.045204 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:22:15.045311 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:22:15.054137 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:22:15.054218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:22:15.079995 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:22:15.081642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:22:15.081807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:22:15.092167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:22:15.092236 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:22:15.098530 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:22:15.098610 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:22:15.106040 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:22:15.106117 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:22:15.120258 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:22:15.164312 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:22:15.165457 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:22:15.180383 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:22:15.180563 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:22:15.192666 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:22:15.192828 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:22:15.210266 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:22:15.210410 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:22:15.218214 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:22:15.218309 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:22:15.226312 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:22:15.226388 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:22:15.234394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:22:15.234462 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:22:15.266081 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:22:15.278264 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:22:15.278368 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:22:15.292441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:22:15.292537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:22:15.306251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:22:15.306952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
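The teardown above stops the initrd-side units just before the root switch; their messages typically survive into the main journal and can be pulled back out after boot, for example:

    journalctl -b -o short-precise -u ignition-files.service -u ignition-mount.service
    journalctl -b _PID=1 | grep -i dracut     # PID 1 messages about the dracut hooks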
Jan 17 00:22:15.319959 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:22:15.347295 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:22:15.381148 systemd[1]: Switching root. Jan 17 00:22:15.437534 systemd-journald[194]: Journal stopped Jan 17 00:22:17.907488 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 17 00:22:17.907573 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:22:17.907593 kernel: SELinux: policy capability open_perms=1 Jan 17 00:22:17.907615 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:22:17.907630 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:22:17.907645 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:22:17.907660 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:22:17.907675 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:22:17.907738 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:22:17.907755 kernel: audit: type=1403 audit(1768609335.907:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:22:17.907775 systemd[1]: Successfully loaded SELinux policy in 101.863ms. Jan 17 00:22:17.907802 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.502ms. Jan 17 00:22:17.907820 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:22:17.907835 systemd[1]: Detected virtualization kvm. Jan 17 00:22:17.907852 systemd[1]: Detected architecture x86-64. Jan 17 00:22:17.907867 systemd[1]: Detected first boot. Jan 17 00:22:17.907883 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:22:17.907902 zram_generator::config[1085]: No configuration found. Jan 17 00:22:17.907991 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:22:17.908009 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:22:17.908025 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:22:17.908042 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:22:17.908058 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:22:17.908076 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:22:17.908098 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:22:17.908118 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:22:17.908134 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:22:17.908151 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:22:17.908166 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:22:17.908182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:22:17.908204 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:22:17.908220 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
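"zram_generator::config: No configuration found" above simply means no zram devices are set up on this boot. A minimal configuration of the kind the generator looks for is sketched below, assuming the stock path and illustrative sizing.

    # /etc/systemd/zram-generator.conf (hypothetical example)
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd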
Jan 17 00:22:17.908236 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:22:17.908252 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:22:17.908272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:22:17.908288 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:22:17.908304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:22:17.908319 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:22:17.908335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:22:17.908352 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:22:17.908368 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:22:17.908383 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:22:17.908402 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:22:17.908418 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:22:17.908436 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:22:17.908452 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:22:17.908468 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:22:17.908484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:22:17.908499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:22:17.908515 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:22:17.908530 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:22:17.908549 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:22:17.908565 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:22:17.908581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:17.908596 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:22:17.908612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:22:17.908628 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:22:17.908643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:22:17.908659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:22:17.908675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:22:17.908742 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:22:17.908760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:22:17.908794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:22:17.908811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:22:17.908827 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:22:17.908844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 17 00:22:17.908861 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:22:17.908898 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:22:17.908957 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:22:17.908975 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:22:17.908990 kernel: fuse: init (API version 7.39) Jan 17 00:22:17.909006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:22:17.909021 kernel: loop: module loaded Jan 17 00:22:17.909036 kernel: ACPI: bus type drm_connector registered Jan 17 00:22:17.909052 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:22:17.909091 systemd-journald[1181]: Collecting audit messages is disabled. Jan 17 00:22:17.909122 systemd-journald[1181]: Journal started Jan 17 00:22:17.909150 systemd-journald[1181]: Runtime Journal (/run/log/journal/4750b504659b41f2bbee8618129d35e1) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:22:17.921508 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:22:17.930749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:22:17.940789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:17.952292 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:22:17.957093 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:22:17.961624 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:22:17.966992 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:22:17.975782 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:22:17.981290 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:22:17.987065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:22:17.991964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:22:18.000648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:22:18.016311 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:22:18.017538 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:22:18.027053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:22:18.027800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:22:18.033871 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:22:18.034207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:22:18.038418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:22:18.038748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:22:18.043049 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:22:18.043367 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:22:18.048090 systemd[1]: modprobe@loop.service: Deactivated successfully. 
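The journald line above reports a 6.0M runtime journal with a 48.3M cap; those limits come from journald's defaults (a fraction of the backing filesystem) and can be pinned explicitly with a drop-in, for example:

    # /etc/systemd/journald.conf.d/size.conf (hypothetical drop-in; values illustrative)
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=512M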
Jan 17 00:22:18.048353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:22:18.055089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:22:18.064025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:22:18.078165 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:22:18.102461 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:22:18.118282 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:22:18.129110 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:22:18.135428 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:22:18.139500 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:22:18.151088 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:22:18.157097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:22:18.160375 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:22:18.165604 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:22:18.172250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:22:18.179204 systemd-journald[1181]: Time spent on flushing to /var/log/journal/4750b504659b41f2bbee8618129d35e1 is 24.938ms for 969 entries. Jan 17 00:22:18.179204 systemd-journald[1181]: System Journal (/var/log/journal/4750b504659b41f2bbee8618129d35e1) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:22:18.233237 systemd-journald[1181]: Received client request to flush runtime journal. Jan 17 00:22:18.179996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:22:18.195107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:22:18.201618 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:22:18.207413 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:22:18.213101 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:22:18.222901 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:22:18.240836 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:22:18.247779 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:22:18.260106 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jan 17 00:22:18.260144 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jan 17 00:22:18.262150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:22:18.268422 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:22:18.295140 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:22:18.301489 udevadm[1227]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:22:18.352679 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:22:18.365263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:22:18.397027 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 17 00:22:18.397080 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jan 17 00:22:18.405980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:22:18.876191 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:22:18.900673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:22:18.936837 systemd-udevd[1246]: Using default interface naming scheme 'v255'. Jan 17 00:22:18.974897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:22:19.000896 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:22:19.023057 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:22:19.046491 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:22:19.076750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1258) Jan 17 00:22:19.141775 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:22:19.175781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 00:22:19.187778 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:22:19.214810 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:22:19.222289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:22:19.241800 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:22:19.245790 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:22:19.253281 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:22:19.253603 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:22:19.257804 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:22:19.259183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:22:19.267076 systemd-networkd[1265]: lo: Link UP Jan 17 00:22:19.267494 systemd-networkd[1265]: lo: Gained carrier Jan 17 00:22:19.270108 systemd-networkd[1265]: Enumeration completed Jan 17 00:22:19.270380 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:22:19.273748 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:22:19.273755 systemd-networkd[1265]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:22:19.276322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:22:19.276828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:22:19.277100 systemd-networkd[1265]: eth0: Link UP Jan 17 00:22:19.277108 systemd-networkd[1265]: eth0: Gained carrier Jan 17 00:22:19.277125 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
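The "based on potentially unpredictable interface name" warning repeats here because zz-default.network matches on the interface name. A common way to make the match deterministic is to key a dedicated network unit on the MAC address instead; a sketch with a placeholder address:

    # /etc/systemd/network/10-wired.network (hypothetical; MAC is a placeholder)
    [Match]
    MACAddress=52:54:00:12:34:56

    [Network]
    DHCP=yes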
Jan 17 00:22:19.294851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:22:19.295792 systemd-networkd[1265]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:22:19.321895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:22:19.438871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:22:19.494218 kernel: kvm_amd: TSC scaling supported Jan 17 00:22:19.494295 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:22:19.494309 kernel: kvm_amd: Nested Paging enabled Jan 17 00:22:19.497883 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:22:19.497975 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:22:19.598849 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:22:19.629836 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:22:19.656320 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:22:19.677070 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:22:19.714054 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:22:19.721187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:22:19.742083 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:22:19.754258 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:22:19.796030 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:22:19.809488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:22:19.818626 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:22:19.818755 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:22:19.832337 systemd[1]: Reached target machines.target - Containers. Jan 17 00:22:19.837326 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:22:19.863072 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:22:19.876675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:22:19.883370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:22:19.884970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:22:19.897252 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:22:19.909472 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:22:19.913540 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:22:19.930171 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:22:19.956772 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:22:19.989828 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 17 00:22:19.997540 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:22:20.062583 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:22:20.128811 kernel: loop1: detected capacity change from 0 to 224512 Jan 17 00:22:20.204763 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 00:22:20.328764 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 00:22:20.385814 kernel: loop4: detected capacity change from 0 to 224512 Jan 17 00:22:20.423836 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:22:20.461970 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:22:20.462807 (sd-merge)[1321]: Merged extensions into '/usr'. Jan 17 00:22:20.468634 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:22:20.468683 systemd[1]: Reloading... Jan 17 00:22:20.561804 zram_generator::config[1349]: No configuration found. Jan 17 00:22:20.681055 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:22:20.738513 systemd-networkd[1265]: eth0: Gained IPv6LL Jan 17 00:22:20.759161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:20.876493 systemd[1]: Reloading finished in 406 ms. Jan 17 00:22:20.907608 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:22:20.922867 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:22:20.932392 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:22:20.970128 systemd[1]: Starting ensure-sysext.service... Jan 17 00:22:20.975480 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:22:20.984230 systemd[1]: Reloading requested from client PID 1395 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:22:20.984285 systemd[1]: Reloading... Jan 17 00:22:21.278660 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:22:21.279465 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:22:21.281307 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:22:21.281892 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Jan 17 00:22:21.282090 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Jan 17 00:22:21.287625 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:22:21.287672 systemd-tmpfiles[1396]: Skipping /boot Jan 17 00:22:21.300792 zram_generator::config[1426]: No configuration found. Jan 17 00:22:21.684807 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:22:21.684848 systemd-tmpfiles[1396]: Skipping /boot Jan 17 00:22:21.816454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:21.920879 systemd[1]: Reloading finished in 935 ms. 
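The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. After boot, the same merge can be inspected or redone from the CLI:

    systemd-sysext status     # list merged extensions and the hierarchies they cover
    systemd-sysext refresh    # re-merge after adding/removing images under /etc/extensions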
Jan 17 00:22:21.986446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:22:22.023049 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:22:22.037147 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:22:22.058873 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:22:22.082204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:22:22.091237 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:22:22.110678 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:22.111046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:22:22.116211 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:22:22.131132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:22:22.151880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:22:22.155500 augenrules[1493]: No rules Jan 17 00:22:22.159084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:22:22.161769 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:22.165994 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:22:22.179770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:22:22.180149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:22:22.187138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:22:22.187441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:22:22.197085 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:22:22.204304 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:22:22.204908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:22:22.234385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:22.236616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:22:22.257107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:22:22.268113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:22:22.282215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:22:22.290389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:22:22.295514 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:22:22.302203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:22.307221 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 17 00:22:22.322472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:22:22.322915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:22:22.338999 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:22:22.348896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:22:22.349245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:22:22.360586 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:22:22.361025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:22:22.366176 systemd-resolved[1480]: Positive Trust Anchors: Jan 17 00:22:22.366193 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:22:22.366220 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:22:22.369597 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:22:22.394987 systemd-resolved[1480]: Defaulting to hostname 'linux'. Jan 17 00:22:22.403169 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:22:22.434536 systemd[1]: Reached target network.target - Network. Jan 17 00:22:22.441858 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:22:22.448248 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:22:22.454981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:22.455365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:22:22.475543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:22:22.483347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:22:22.496625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:22:22.503485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:22:22.509617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:22:22.509924 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:22:22.510096 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:22:22.511645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:22:22.512010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:22:22.517577 systemd[1]: modprobe@drm.service: Deactivated successfully. 
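systemd-resolved loads the root DNSSEC trust anchor and the usual negative anchors, then defaults the hostname to 'linux' since none was configured. Its runtime view can be checked from the stub resolver side, for example:

    resolvectl status               # global and per-link DNS servers, DNSSEC mode, domains
    resolvectl query flatcar.org    # exercise a lookup through the stub resolver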
Jan 17 00:22:22.518013 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:22:22.523324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:22:22.523599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:22:22.532368 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:22:22.532758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:22:22.549770 systemd[1]: Finished ensure-sysext.service. Jan 17 00:22:22.568045 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:22:22.568285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:22:22.589006 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:22:22.809011 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:22:23.891084 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:22:23.891170 systemd-timesyncd[1541]: Initial clock synchronization to Sat 2026-01-17 00:22:23.887123 UTC. Jan 17 00:22:23.893875 systemd-resolved[1480]: Clock change detected. Flushing caches. Jan 17 00:22:23.901647 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:22:23.908878 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:22:23.918046 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:22:23.928415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:22:23.936309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:22:23.936395 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:22:23.942892 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:22:23.959077 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:22:23.964770 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:22:23.972171 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:22:23.977835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:22:23.984626 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:22:23.994073 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:22:24.001875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:22:24.006412 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:22:24.009952 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:22:24.013733 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:22:24.013974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:22:24.014959 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:22:24.021700 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:22:24.029017 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
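systemd-timesyncd reaches the DHCP-provided server 10.0.0.1:123 and steps the clock, which is why resolved flushes its caches on the clock change. The server list can also be set statically with a drop-in, for example:

    # /etc/systemd/timesyncd.conf.d/ntp.conf (hypothetical drop-in; servers illustrative)
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org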
Jan 17 00:22:24.036986 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:22:24.059446 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:22:24.067152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:22:24.071584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:22:24.074542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:24.081663 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:22:24.085399 jq[1550]: false Jan 17 00:22:24.092091 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:22:24.099214 extend-filesystems[1551]: Found loop3 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found loop4 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found loop5 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found sr0 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda1 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda2 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda3 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found usr Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda4 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda6 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda7 Jan 17 00:22:24.105664 extend-filesystems[1551]: Found vda9 Jan 17 00:22:24.105664 extend-filesystems[1551]: Checking size of /dev/vda9 Jan 17 00:22:24.189392 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:22:24.105417 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:22:24.121800 dbus-daemon[1548]: [system] SELinux support is enabled Jan 17 00:22:24.191138 extend-filesystems[1551]: Resized partition /dev/vda9 Jan 17 00:22:24.138605 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:22:24.199021 extend-filesystems[1569]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:22:24.178964 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:22:24.201020 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:22:24.212778 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:22:24.221458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1583) Jan 17 00:22:24.227621 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:22:24.240385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:22:24.257398 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:22:24.264339 jq[1593]: true Jan 17 00:22:24.280732 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:22:24.276368 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:22:24.277177 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
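extend-filesystems grows the root filesystem online: the EXT4 lines above show /dev/vda9 going from 553472 to 1864699 4k blocks. Roughly the equivalent manual steps, with the device name taken from the log:

    lsblk /dev/vda9     # confirm the partition was already enlarged
    resize2fs /dev/vda9 # online-resize the mounted ext4 filesystem to fill it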
Jan 17 00:22:24.309791 update_engine[1588]: I20260117 00:22:24.287673 1588 main.cc:92] Flatcar Update Engine starting Jan 17 00:22:24.309791 update_engine[1588]: I20260117 00:22:24.308076 1588 update_check_scheduler.cc:74] Next update check in 3m31s Jan 17 00:22:24.285040 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:22:24.286718 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:22:24.297071 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:22:24.309137 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:22:24.309903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:22:24.318438 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:22:24.318438 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:22:24.318438 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:22:24.343214 jq[1602]: true Jan 17 00:22:24.345178 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Jan 17 00:22:24.360785 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:22:24.361166 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:22:24.371092 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:22:24.380005 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:22:24.390123 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:22:24.413953 tar[1600]: linux-amd64/LICENSE Jan 17 00:22:24.413953 tar[1600]: linux-amd64/helm Jan 17 00:22:24.418403 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:22:24.425103 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:22:24.430734 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:22:24.430848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:22:24.430882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:22:24.436320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:22:24.436343 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:22:24.442640 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:22:24.447870 systemd-logind[1580]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:22:24.448101 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:22:24.458907 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:22:24.459660 systemd-logind[1580]: New seat seat0. Jan 17 00:22:24.478095 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:22:24.514709 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
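The extend-filesystems/resize2fs entries above grow /dev/vda9 online from 553472 to 1864699 blocks, with the kernel reporting 4k blocks. A small worked calculation to put those counts into familiar units; the block counts and block size come from the log, the byte figures are computed here.

```python
BLOCK_SIZE = 4096                 # 4 KiB blocks, per the EXT4-fs resize messages above

old_blocks = 553_472
new_blocks = 1_864_699

def gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")               # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")               # ~7.11 GiB
print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")  # ~5.00 GiB
```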
Jan 17 00:22:24.537816 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:22:24.563308 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:22:24.565994 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:22:24.575749 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:22:24.576666 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:22:24.577055 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:22:24.596099 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:22:24.602057 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:22:24.617001 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:22:24.633829 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:22:24.657334 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:22:24.667540 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:22:24.729160 containerd[1603]: time="2026-01-17T00:22:24.728978314Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:22:24.763390 containerd[1603]: time="2026-01-17T00:22:24.763331184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:22:24.767063 containerd[1603]: time="2026-01-17T00:22:24.766971090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:22:24.767063 containerd[1603]: time="2026-01-17T00:22:24.767005635Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:22:24.767063 containerd[1603]: time="2026-01-17T00:22:24.767024210Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:22:24.767373 containerd[1603]: time="2026-01-17T00:22:24.767307028Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:22:24.767373 containerd[1603]: time="2026-01-17T00:22:24.767330672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:22:24.767680 containerd[1603]: time="2026-01-17T00:22:24.767621966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:22:24.767680 containerd[1603]: time="2026-01-17T00:22:24.767663463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:22:24.767991 containerd[1603]: time="2026-01-17T00:22:24.767947514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768029 containerd[1603]: time="2026-01-17T00:22:24.767988550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768029 containerd[1603]: time="2026-01-17T00:22:24.768006484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768029 containerd[1603]: time="2026-01-17T00:22:24.768018606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768161 containerd[1603]: time="2026-01-17T00:22:24.768125546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768556 containerd[1603]: time="2026-01-17T00:22:24.768523099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768755 containerd[1603]: time="2026-01-17T00:22:24.768712954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:22:24.768789 containerd[1603]: time="2026-01-17T00:22:24.768752336Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:22:24.768906 containerd[1603]: time="2026-01-17T00:22:24.768862322Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:22:24.768990 containerd[1603]: time="2026-01-17T00:22:24.768957089Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:22:24.775515 containerd[1603]: time="2026-01-17T00:22:24.775079957Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:22:24.775515 containerd[1603]: time="2026-01-17T00:22:24.775127476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:22:24.775515 containerd[1603]: time="2026-01-17T00:22:24.775146241Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:22:24.775515 containerd[1603]: time="2026-01-17T00:22:24.775165397Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:22:24.775515 containerd[1603]: time="2026-01-17T00:22:24.775182850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:22:24.775515 containerd[1603]: time="2026-01-17T00:22:24.775412278Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.775984937Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776134446Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776152931Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776168680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776185592Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776207373Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776285418Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776306978Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776325483Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776341984Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776361240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776377059Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776400884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.776738 containerd[1603]: time="2026-01-17T00:22:24.776418046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776433966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776455887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776511390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776530346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776545013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776560452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776576372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776596770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776611808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776627808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776643658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776663064Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776686337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776699932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777115 containerd[1603]: time="2026-01-17T00:22:24.776712616Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776768380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776788157Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776890418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776907650Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776920504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776936264Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776955450Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:22:24.777693 containerd[1603]: time="2026-01-17T00:22:24.776968033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:22:24.777905 containerd[1603]: time="2026-01-17T00:22:24.777320852Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:22:24.777905 containerd[1603]: time="2026-01-17T00:22:24.777392806Z" level=info msg="Connect containerd service" Jan 17 00:22:24.777905 containerd[1603]: time="2026-01-17T00:22:24.777436147Z" level=info msg="using legacy CRI server" Jan 17 00:22:24.777905 containerd[1603]: time="2026-01-17T00:22:24.777444323Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:22:24.777905 containerd[1603]: time="2026-01-17T00:22:24.777583382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778346818Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778575595Z" level=info msg="Start subscribing containerd event" Jan 17 00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778643692Z" level=info msg="Start recovering state" Jan 17 00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778712410Z" level=info msg="Start event monitor" Jan 17 00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778723992Z" level=info msg="Start snapshots syncer" Jan 17 00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778734872Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:22:24.778922 containerd[1603]: time="2026-01-17T00:22:24.778744440Z" level=info msg="Start streaming server" Jan 17 00:22:24.779921 containerd[1603]: time="2026-01-17T00:22:24.779363777Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:22:24.779921 containerd[1603]: time="2026-01-17T00:22:24.779619985Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:22:24.780425 containerd[1603]: time="2026-01-17T00:22:24.780396310Z" level=info msg="containerd successfully booted in 0.053649s" Jan 17 00:22:24.780703 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:22:25.183555 tar[1600]: linux-amd64/README.md Jan 17 00:22:25.222817 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:22:28.131967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:28.141021 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:22:28.163344 systemd[1]: Startup finished in 15.996s (kernel) + 11.273s (userspace) = 27.270s. Jan 17 00:22:28.180408 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:30.141674 kubelet[1694]: E0117 00:22:30.141443 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:30.149809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:30.151062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:22:31.089679 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:22:31.105119 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662). Jan 17 00:22:31.194001 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:31.196847 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:31.230180 systemd-logind[1580]: New session 1 of user core. Jan 17 00:22:31.231350 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:22:31.247856 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:22:31.273805 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:22:31.291844 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 17 00:22:31.298092 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:22:31.473409 systemd[1714]: Queued start job for default target default.target. Jan 17 00:22:31.474963 systemd[1714]: Created slice app.slice - User Application Slice. Jan 17 00:22:31.475017 systemd[1714]: Reached target paths.target - Paths. Jan 17 00:22:31.475034 systemd[1714]: Reached target timers.target - Timers. Jan 17 00:22:31.498574 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:22:31.509031 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:22:31.509156 systemd[1714]: Reached target sockets.target - Sockets. Jan 17 00:22:31.509178 systemd[1714]: Reached target basic.target - Basic System. Jan 17 00:22:31.509315 systemd[1714]: Reached target default.target - Main User Target. Jan 17 00:22:31.509372 systemd[1714]: Startup finished in 195ms. Jan 17 00:22:31.510388 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:22:31.522003 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:22:31.588130 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:36672.service - OpenSSH per-connection server daemon (10.0.0.1:36672). Jan 17 00:22:31.644554 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:31.646709 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:31.657173 systemd-logind[1580]: New session 2 of user core. Jan 17 00:22:31.668713 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:22:31.736161 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:31.751783 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:36676.service - OpenSSH per-connection server daemon (10.0.0.1:36676). Jan 17 00:22:31.752711 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:36672.service: Deactivated successfully. Jan 17 00:22:31.757420 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:22:31.757598 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:22:31.760999 systemd-logind[1580]: Removed session 2. Jan 17 00:22:31.801713 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 36676 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:31.804743 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:31.816655 systemd-logind[1580]: New session 3 of user core. Jan 17 00:22:31.828845 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:22:31.897606 sshd[1731]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:31.917090 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:36682.service - OpenSSH per-connection server daemon (10.0.0.1:36682). Jan 17 00:22:31.920157 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:36676.service: Deactivated successfully. Jan 17 00:22:31.922389 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:22:31.924385 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:22:31.930207 systemd-logind[1580]: Removed session 3. 
Jan 17 00:22:31.972329 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 36682 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:31.974468 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:31.985885 systemd-logind[1580]: New session 4 of user core. Jan 17 00:22:31.998765 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:22:32.082010 sshd[1740]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:32.095663 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:36698.service - OpenSSH per-connection server daemon (10.0.0.1:36698). Jan 17 00:22:32.096441 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:36682.service: Deactivated successfully. Jan 17 00:22:32.099572 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:22:32.107907 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:22:32.117204 systemd-logind[1580]: Removed session 4. Jan 17 00:22:32.151477 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 36698 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:32.154176 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:32.167146 systemd-logind[1580]: New session 5 of user core. Jan 17 00:22:32.180767 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:22:32.265866 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:22:32.266423 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:32.295086 sudo[1754]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:32.300743 sshd[1747]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:32.316043 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:36714.service - OpenSSH per-connection server daemon (10.0.0.1:36714). Jan 17 00:22:32.319157 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:36698.service: Deactivated successfully. Jan 17 00:22:32.325777 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:22:32.326868 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:22:32.329339 systemd-logind[1580]: Removed session 5. Jan 17 00:22:32.367173 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 36714 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:32.368590 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:32.377858 systemd-logind[1580]: New session 6 of user core. Jan 17 00:22:32.392354 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:22:32.465719 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:22:32.466189 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:32.474684 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:32.487544 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:22:32.488010 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:32.528040 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:22:32.535113 auditctl[1767]: No rules Jan 17 00:22:32.536378 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 17 00:22:32.536899 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:22:32.542087 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:22:32.601038 augenrules[1786]: No rules Jan 17 00:22:32.604148 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:22:32.610770 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:32.615763 sshd[1756]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:32.629624 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588). Jan 17 00:22:32.630374 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:36714.service: Deactivated successfully. Jan 17 00:22:32.632306 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:22:32.640555 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:22:32.645191 systemd-logind[1580]: Removed session 6. Jan 17 00:22:32.684590 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:22:32.687061 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:32.698312 systemd-logind[1580]: New session 7 of user core. Jan 17 00:22:32.710561 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:22:32.779813 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:22:32.780975 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:33.401777 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:22:33.402300 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:22:33.857389 dockerd[1818]: time="2026-01-17T00:22:33.857127545Z" level=info msg="Starting up" Jan 17 00:22:34.215585 dockerd[1818]: time="2026-01-17T00:22:34.215216533Z" level=info msg="Loading containers: start." Jan 17 00:22:34.399661 kernel: Initializing XFRM netlink socket Jan 17 00:22:34.559207 systemd-networkd[1265]: docker0: Link UP Jan 17 00:22:34.606382 dockerd[1818]: time="2026-01-17T00:22:34.606155342Z" level=info msg="Loading containers: done." Jan 17 00:22:34.993117 dockerd[1818]: time="2026-01-17T00:22:34.992951024Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:22:34.993803 dockerd[1818]: time="2026-01-17T00:22:34.993342666Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:22:34.994372 dockerd[1818]: time="2026-01-17T00:22:34.994314540Z" level=info msg="Daemon has completed initialization" Jan 17 00:22:35.087796 dockerd[1818]: time="2026-01-17T00:22:35.086957847Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:22:35.106303 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:22:37.985686 containerd[1603]: time="2026-01-17T00:22:37.983881385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:22:39.173597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917073839.mount: Deactivated successfully. 
Jan 17 00:22:40.399157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:22:40.411902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:41.337593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:41.390829 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:41.623779 containerd[1603]: time="2026-01-17T00:22:41.623472301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:41.626729 containerd[1603]: time="2026-01-17T00:22:41.626572185Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:22:41.631630 containerd[1603]: time="2026-01-17T00:22:41.629270907Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:41.639072 containerd[1603]: time="2026-01-17T00:22:41.634122674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:41.639072 containerd[1603]: time="2026-01-17T00:22:41.637674910Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.653704008s" Jan 17 00:22:41.639072 containerd[1603]: time="2026-01-17T00:22:41.637742315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:22:41.640579 containerd[1603]: time="2026-01-17T00:22:41.640420258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:22:41.892745 kubelet[2035]: E0117 00:22:41.891143 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:41.900049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:41.900496 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
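Each kubelet restart above fails for the same reason: /var/lib/kubelet/config.yaml does not exist yet (on a fresh node it is normally written later by the bootstrap tooling, e.g. kubeadm). A hedged pre-flight sketch that reproduces the check behind that error message; the path comes from the log, the apiVersion/kind header is the standard KubeletConfiguration one, and the cgroupfs driver mirrors the CgroupDriver value that appears later in this log, but the stub itself is illustrative rather than the file real tooling would write.

```python
import os
import sys

CONFIG_PATH = "/var/lib/kubelet/config.yaml"   # path from the kubelet error above

# Illustrative stub only: a real node receives this file from its bootstrap tooling.
STUB_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
"""

def ensure_kubelet_config(path: str = CONFIG_PATH, write_stub: bool = False) -> bool:
    """Return True if the kubelet config exists; optionally drop a placeholder stub."""
    if os.path.isfile(path):
        return True
    print(f"missing kubelet config: {path}", file=sys.stderr)
    if write_stub:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(STUB_CONFIG)
        return True
    return False

if __name__ == "__main__":
    ensure_kubelet_config()
```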
Jan 17 00:22:43.557101 containerd[1603]: time="2026-01-17T00:22:43.555153034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:43.558752 containerd[1603]: time="2026-01-17T00:22:43.558575885Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:22:43.561088 containerd[1603]: time="2026-01-17T00:22:43.560970938Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:43.570075 containerd[1603]: time="2026-01-17T00:22:43.569623661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:43.571865 containerd[1603]: time="2026-01-17T00:22:43.571793575Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.931307904s" Jan 17 00:22:43.571865 containerd[1603]: time="2026-01-17T00:22:43.571858436Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 00:22:43.576024 containerd[1603]: time="2026-01-17T00:22:43.575847292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:22:45.918370 kernel: hrtimer: interrupt took 3641961 ns Jan 17 00:22:46.566171 containerd[1603]: time="2026-01-17T00:22:46.566009317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:46.569687 containerd[1603]: time="2026-01-17T00:22:46.569570797Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:22:46.572034 containerd[1603]: time="2026-01-17T00:22:46.571942906Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:46.585276 containerd[1603]: time="2026-01-17T00:22:46.585144649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:46.586583 containerd[1603]: time="2026-01-17T00:22:46.586456871Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.010558854s" Jan 17 00:22:46.586583 containerd[1603]: time="2026-01-17T00:22:46.586517384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference 
\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:22:46.588876 containerd[1603]: time="2026-01-17T00:22:46.588831620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:22:49.784524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802731764.mount: Deactivated successfully. Jan 17 00:22:52.186989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:22:52.200805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:53.140490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:53.144450 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:53.465876 containerd[1603]: time="2026-01-17T00:22:53.465524369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:53.467116 containerd[1603]: time="2026-01-17T00:22:53.466987789Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:22:53.470396 containerd[1603]: time="2026-01-17T00:22:53.469147046Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:53.474287 containerd[1603]: time="2026-01-17T00:22:53.473653235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:53.475725 containerd[1603]: time="2026-01-17T00:22:53.475513028Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 6.886595358s" Jan 17 00:22:53.475725 containerd[1603]: time="2026-01-17T00:22:53.475596735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:22:53.478706 containerd[1603]: time="2026-01-17T00:22:53.477780412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:22:53.708315 kubelet[2074]: E0117 00:22:53.708054 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:53.715597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:53.716041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:22:54.236726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131601625.mount: Deactivated successfully. 
Jan 17 00:22:58.526014 containerd[1603]: time="2026-01-17T00:22:58.525890281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:58.529505 containerd[1603]: time="2026-01-17T00:22:58.529311663Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:22:58.531322 containerd[1603]: time="2026-01-17T00:22:58.531105497Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:58.536457 containerd[1603]: time="2026-01-17T00:22:58.536379187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:58.541458 containerd[1603]: time="2026-01-17T00:22:58.540927960Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 5.063110709s" Jan 17 00:22:58.541458 containerd[1603]: time="2026-01-17T00:22:58.540964428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:22:58.543883 containerd[1603]: time="2026-01-17T00:22:58.543782091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:22:59.144292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858533727.mount: Deactivated successfully. 
Jan 17 00:22:59.167620 containerd[1603]: time="2026-01-17T00:22:59.167476213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:59.173899 containerd[1603]: time="2026-01-17T00:22:59.173724474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:22:59.193674 containerd[1603]: time="2026-01-17T00:22:59.193447636Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:59.205217 containerd[1603]: time="2026-01-17T00:22:59.204891576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:59.210327 containerd[1603]: time="2026-01-17T00:22:59.209615258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 665.292366ms" Jan 17 00:22:59.210327 containerd[1603]: time="2026-01-17T00:22:59.210291459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:22:59.213474 containerd[1603]: time="2026-01-17T00:22:59.213215242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:22:59.934893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068038535.mount: Deactivated successfully. Jan 17 00:23:03.927907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:23:03.951057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:04.294960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:23:04.306886 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:23:04.408463 kubelet[2203]: E0117 00:23:04.408211 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:23:04.413436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:23:04.413824 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:23:07.537771 containerd[1603]: time="2026-01-17T00:23:07.537680780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:07.538812 containerd[1603]: time="2026-01-17T00:23:07.538698711Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:23:07.540792 containerd[1603]: time="2026-01-17T00:23:07.540680270Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:07.544493 containerd[1603]: time="2026-01-17T00:23:07.544431031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:07.548819 containerd[1603]: time="2026-01-17T00:23:07.548730640Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.335405792s" Jan 17 00:23:07.548819 containerd[1603]: time="2026-01-17T00:23:07.548801941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:23:09.929931 update_engine[1588]: I20260117 00:23:09.929653 1588 update_attempter.cc:509] Updating boot flags... Jan 17 00:23:10.016313 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2244) Jan 17 00:23:10.110927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2245) Jan 17 00:23:10.189292 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2245) Jan 17 00:23:11.530664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:23:11.560886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:11.613809 systemd[1]: Reloading requested from client PID 2260 ('systemctl') (unit session-7.scope)... Jan 17 00:23:11.613867 systemd[1]: Reloading... Jan 17 00:23:11.777366 zram_generator::config[2305]: No configuration found. Jan 17 00:23:12.388014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:23:12.503871 systemd[1]: Reloading finished in 889 ms. Jan 17 00:23:12.689718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:23:12.714702 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:23:12.720817 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:12.723394 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:23:12.725868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:23:12.755626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:13.080795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:23:13.091068 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:23:13.490734 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:23:13.490734 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:23:13.490734 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:23:13.491519 kubelet[2362]: I0117 00:23:13.490816 2362 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:23:14.073387 kubelet[2362]: I0117 00:23:14.072904 2362 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:23:14.073387 kubelet[2362]: I0117 00:23:14.073197 2362 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:23:14.074153 kubelet[2362]: I0117 00:23:14.073922 2362 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:23:14.169321 kubelet[2362]: I0117 00:23:14.168434 2362 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:23:14.381996 kubelet[2362]: E0117 00:23:14.381333 2362 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:14.398494 kubelet[2362]: E0117 00:23:14.398287 2362 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:23:14.398494 kubelet[2362]: I0117 00:23:14.398333 2362 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:23:14.421139 kubelet[2362]: I0117 00:23:14.419157 2362 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:23:14.423706 kubelet[2362]: I0117 00:23:14.422994 2362 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:23:14.424513 kubelet[2362]: I0117 00:23:14.423742 2362 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:23:14.424513 kubelet[2362]: I0117 00:23:14.424491 2362 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:23:14.424513 kubelet[2362]: I0117 00:23:14.424509 2362 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:23:14.425315 kubelet[2362]: I0117 00:23:14.425077 2362 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:23:14.431771 kubelet[2362]: I0117 00:23:14.430348 2362 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:23:14.431771 kubelet[2362]: I0117 00:23:14.430458 2362 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:23:14.431771 kubelet[2362]: I0117 00:23:14.430497 2362 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:23:14.431771 kubelet[2362]: I0117 00:23:14.430553 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:23:14.445159 kubelet[2362]: W0117 00:23:14.444399 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:14.445159 kubelet[2362]: I0117 00:23:14.444500 2362 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:23:14.445159 kubelet[2362]: E0117 00:23:14.444502 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:14.445159 kubelet[2362]: W0117 00:23:14.444467 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:14.445159 kubelet[2362]: E0117 00:23:14.444739 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:14.446152 kubelet[2362]: I0117 00:23:14.446106 2362 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:23:14.455482 kubelet[2362]: W0117 00:23:14.455339 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:23:14.463313 kubelet[2362]: I0117 00:23:14.462552 2362 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:23:14.463313 kubelet[2362]: I0117 00:23:14.462824 2362 server.go:1287] "Started kubelet" Jan 17 00:23:14.463434 kubelet[2362]: I0117 00:23:14.463289 2362 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:23:14.465725 kubelet[2362]: I0117 00:23:14.465143 2362 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:23:14.473877 kubelet[2362]: I0117 00:23:14.473336 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:23:14.473877 kubelet[2362]: I0117 00:23:14.473856 2362 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:23:14.476001 kubelet[2362]: I0117 00:23:14.475934 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:23:14.484625 kubelet[2362]: I0117 00:23:14.477103 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:23:14.484963 kubelet[2362]: I0117 00:23:14.484831 2362 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:23:14.488054 kubelet[2362]: I0117 00:23:14.488020 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:23:14.488717 kubelet[2362]: I0117 00:23:14.486061 2362 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:23:14.488874 kubelet[2362]: E0117 00:23:14.486677 2362 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:23:14.496071 kubelet[2362]: E0117 00:23:14.491629 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5cdfaf5d813e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:23:14.462654782 +0000 UTC m=+1.344173490,LastTimestamp:2026-01-17 00:23:14.462654782 +0000 UTC m=+1.344173490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:23:14.496071 kubelet[2362]: I0117 00:23:14.485998 2362 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:23:14.496071 kubelet[2362]: I0117 00:23:14.494100 2362 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:23:14.496071 kubelet[2362]: W0117 00:23:14.495114 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:14.496071 kubelet[2362]: E0117 00:23:14.495445 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:14.496071 kubelet[2362]: E0117 00:23:14.495705 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Jan 17 00:23:14.499920 kubelet[2362]: E0117 00:23:14.499627 2362 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:23:14.500502 kubelet[2362]: I0117 00:23:14.500434 2362 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:23:14.554336 kubelet[2362]: I0117 00:23:14.553971 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:23:14.559010 kubelet[2362]: I0117 00:23:14.557849 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:23:14.559010 kubelet[2362]: I0117 00:23:14.557879 2362 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:23:14.559010 kubelet[2362]: I0117 00:23:14.558913 2362 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
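Editor's note on the container_manager_linux.go:273 dump above: the HardEvictionThresholds list is a set of signal / LessThan / value tuples (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). The Go sketch below is purely illustrative of how one such tuple is evaluated; the Threshold type and exceeded method are hypothetical names, not the kubelet's own types.

// Hedged sketch only: hypothetical illustration of the HardEvictionThresholds
// entries logged by container_manager_linux.go above, not the kubelet's types.
package main

import "fmt"

// Threshold mirrors one logged entry: a signal, an implicit "LessThan"
// operator, and either an absolute quantity in bytes or a fraction of capacity.
type Threshold struct {
	Signal     string
	Quantity   int64   // absolute bytes, 0 if unused
	Percentage float64 // fraction of capacity, 0 if unused
}

// exceeded reports whether the observed available amount has dropped below
// the threshold, which is what would make the eviction manager act.
func (t Threshold) exceeded(available, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 * 1024 * 1024} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.10}            // 10%

	// Example numbers, not taken from this host.
	fmt.Println(memory.Signal, memory.exceeded(80*1024*1024, 4<<30)) // true: 80Mi available < 100Mi
	fmt.Println(nodefs.Signal, nodefs.exceeded(20<<30, 100<<30))     // false: 20% free > 10% floor
}

Each logged entry carries either an absolute Quantity (memory.available) or a Percentage of capacity (the filesystem signals), which is why both fields appear in the JSON above with one of them null or zero.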
Jan 17 00:23:14.559010 kubelet[2362]: I0117 00:23:14.558964 2362 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:23:14.559447 kubelet[2362]: E0117 00:23:14.559036 2362 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:23:14.562455 kubelet[2362]: I0117 00:23:14.561992 2362 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:23:14.562455 kubelet[2362]: I0117 00:23:14.562040 2362 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:23:14.562455 kubelet[2362]: I0117 00:23:14.562063 2362 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:23:14.562836 kubelet[2362]: W0117 00:23:14.562372 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:14.562836 kubelet[2362]: E0117 00:23:14.562540 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:14.589479 kubelet[2362]: E0117 00:23:14.589334 2362 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:23:14.635971 kubelet[2362]: I0117 00:23:14.635456 2362 policy_none.go:49] "None policy: Start" Jan 17 00:23:14.635971 kubelet[2362]: I0117 00:23:14.635617 2362 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:23:14.635971 kubelet[2362]: I0117 00:23:14.635813 2362 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:23:14.660138 kubelet[2362]: E0117 00:23:14.659112 2362 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:23:14.662282 kubelet[2362]: I0117 00:23:14.662153 2362 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:23:14.662940 kubelet[2362]: I0117 00:23:14.662886 2362 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:23:14.663092 kubelet[2362]: I0117 00:23:14.663008 2362 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:23:14.665948 kubelet[2362]: I0117 00:23:14.665853 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:23:14.669120 kubelet[2362]: E0117 00:23:14.668971 2362 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:23:14.669120 kubelet[2362]: E0117 00:23:14.669042 2362 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:23:14.696947 kubelet[2362]: E0117 00:23:14.696786 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Jan 17 00:23:14.775130 kubelet[2362]: I0117 00:23:14.774988 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:14.776737 kubelet[2362]: E0117 00:23:14.775901 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 17 00:23:14.884896 kubelet[2362]: E0117 00:23:14.882674 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:14.892387 kubelet[2362]: E0117 00:23:14.889996 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:14.898107 kubelet[2362]: E0117 00:23:14.898009 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:14.898539 kubelet[2362]: I0117 00:23:14.898500 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4af11cf2b8ce2176c204065d7f050cac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4af11cf2b8ce2176c204065d7f050cac\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:14.898539 kubelet[2362]: I0117 00:23:14.898543 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4af11cf2b8ce2176c204065d7f050cac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4af11cf2b8ce2176c204065d7f050cac\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:14.898539 kubelet[2362]: I0117 00:23:14.898635 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:14.898841 kubelet[2362]: I0117 00:23:14.898662 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:14.898841 kubelet[2362]: I0117 00:23:14.898683 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:14.898841 kubelet[2362]: I0117 00:23:14.898702 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4af11cf2b8ce2176c204065d7f050cac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4af11cf2b8ce2176c204065d7f050cac\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:14.898841 kubelet[2362]: I0117 00:23:14.898719 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:14.898841 kubelet[2362]: I0117 00:23:14.898793 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:14.899360 kubelet[2362]: I0117 00:23:14.899064 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:14.980885 kubelet[2362]: I0117 00:23:14.980852 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:14.995844 kubelet[2362]: E0117 00:23:14.994899 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 17 00:23:15.099087 kubelet[2362]: E0117 00:23:15.098724 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Jan 17 00:23:15.184736 kubelet[2362]: E0117 00:23:15.184428 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:15.189793 containerd[1603]: time="2026-01-17T00:23:15.185830540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 17 00:23:15.194185 kubelet[2362]: E0117 00:23:15.193462 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:15.194344 containerd[1603]: time="2026-01-17T00:23:15.193968191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 17 00:23:15.200566 kubelet[2362]: E0117 00:23:15.199126 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:15.200672 containerd[1603]: time="2026-01-17T00:23:15.199729587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4af11cf2b8ce2176c204065d7f050cac,Namespace:kube-system,Attempt:0,}" Jan 17 00:23:15.294467 kubelet[2362]: W0117 00:23:15.294038 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:15.294467 kubelet[2362]: E0117 00:23:15.294200 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:15.322017 kubelet[2362]: E0117 00:23:15.321866 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5cdfaf5d813e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:23:14.462654782 +0000 UTC m=+1.344173490,LastTimestamp:2026-01-17 00:23:14.462654782 +0000 UTC m=+1.344173490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:23:15.333767 kubelet[2362]: W0117 00:23:15.333722 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:15.333918 kubelet[2362]: E0117 00:23:15.333891 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:15.400058 kubelet[2362]: I0117 00:23:15.398766 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:15.400058 kubelet[2362]: E0117 00:23:15.399129 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 17 00:23:15.684410 kubelet[2362]: W0117 00:23:15.683927 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:15.684410 kubelet[2362]: E0117 00:23:15.684523 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:15.905736 kubelet[2362]: W0117 00:23:15.875976 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:15.914953 kubelet[2362]: E0117 00:23:15.911531 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:15.966909 kubelet[2362]: E0117 00:23:15.960493 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Jan 17 00:23:16.212200 kubelet[2362]: I0117 00:23:16.211341 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:16.217834 kubelet[2362]: E0117 00:23:16.216842 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 17 00:23:16.602062 kubelet[2362]: E0117 00:23:16.601503 2362 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:17.028368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474855889.mount: Deactivated successfully. 
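Editor's note: every reflector.go:569 and certificate_manager.go:562 failure above ends in "dial tcp 10.0.0.48:6443: connect: connection refused" because the kube-apiserver static pod has not started yet; the kubelet's informers and the CSR bootstrap simply keep retrying until it comes up. The sketch below only probes that TCP endpoint with a growing delay, assuming the 10.0.0.48:6443 address from the log; it is not client-go's retry machinery.

// Minimal sketch, not client-go's actual retry logic: poll the API server
// TCP endpoint seen in the log until it accepts connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "10.0.0.48:6443" // address taken from the log lines above
	delay := 500 * time.Millisecond

	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is accepting connections\n", attempt, addr)
			return
		}
		// While kube-apiserver is still starting this prints
		// "connect: connection refused", matching the reflector errors above.
		fmt.Printf("attempt %d: %v (retrying in %s)\n", attempt, err, delay)
		time.Sleep(delay)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
}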
Jan 17 00:23:17.068431 containerd[1603]: time="2026-01-17T00:23:17.066340517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:23:17.081053 containerd[1603]: time="2026-01-17T00:23:17.080834103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:23:17.090704 containerd[1603]: time="2026-01-17T00:23:17.090423411Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:23:17.093659 containerd[1603]: time="2026-01-17T00:23:17.092417300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:23:17.103128 containerd[1603]: time="2026-01-17T00:23:17.101125633Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:23:17.105187 containerd[1603]: time="2026-01-17T00:23:17.105043879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:23:17.107689 containerd[1603]: time="2026-01-17T00:23:17.107082405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:23:17.108397 containerd[1603]: time="2026-01-17T00:23:17.108325396Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.90852769s" Jan 17 00:23:17.113914 containerd[1603]: time="2026-01-17T00:23:17.113755031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:23:17.124469 containerd[1603]: time="2026-01-17T00:23:17.124380782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.930347309s" Jan 17 00:23:17.133045 containerd[1603]: time="2026-01-17T00:23:17.132802847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.944004135s" Jan 17 00:23:17.166574 kubelet[2362]: W0117 00:23:17.166217 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 
00:23:17.168057 kubelet[2362]: E0117 00:23:17.167888 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:17.582957 kubelet[2362]: W0117 00:23:17.582626 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:17.582957 kubelet[2362]: E0117 00:23:17.582976 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:17.582957 kubelet[2362]: E0117 00:23:17.582523 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="3.2s" Jan 17 00:23:17.671677 kubelet[2362]: W0117 00:23:17.671388 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:17.671677 kubelet[2362]: E0117 00:23:17.671634 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:17.823339 kubelet[2362]: I0117 00:23:17.823143 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:17.826159 kubelet[2362]: E0117 00:23:17.824646 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 17 00:23:18.082841 containerd[1603]: time="2026-01-17T00:23:18.080642743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:18.082841 containerd[1603]: time="2026-01-17T00:23:18.025958630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.083042062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.083064004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.084938723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.082861823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.083037932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.084338821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.082873974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.083093504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.083109554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.085972 containerd[1603]: time="2026-01-17T00:23:18.084334732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.895051 containerd[1603]: time="2026-01-17T00:23:18.893079357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4af11cf2b8ce2176c204065d7f050cac,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fb681991ba577d7b2e534e5dd287f28dbbeae5090a2990d09fe12cae5eb3c33\"" Jan 17 00:23:19.029915 kubelet[2362]: W0117 00:23:19.029477 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Jan 17 00:23:19.029915 kubelet[2362]: E0117 00:23:19.029540 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:23:19.031443 kubelet[2362]: E0117 00:23:19.030216 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:19.031506 containerd[1603]: time="2026-01-17T00:23:19.030700401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"af627ad3e1165790ba5067bb47ab00464d2fe0004527ae75ea96e5b102bce6b2\"" Jan 17 00:23:19.044142 containerd[1603]: time="2026-01-17T00:23:19.036640961Z" level=info msg="CreateContainer within sandbox \"0fb681991ba577d7b2e534e5dd287f28dbbeae5090a2990d09fe12cae5eb3c33\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:23:19.044142 containerd[1603]: time="2026-01-17T00:23:19.042091667Z" level=info msg="CreateContainer within sandbox \"af627ad3e1165790ba5067bb47ab00464d2fe0004527ae75ea96e5b102bce6b2\" 
for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:23:19.044460 kubelet[2362]: E0117 00:23:19.036650 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:19.081045 containerd[1603]: time="2026-01-17T00:23:19.080900640Z" level=info msg="CreateContainer within sandbox \"0fb681991ba577d7b2e534e5dd287f28dbbeae5090a2990d09fe12cae5eb3c33\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6c0084a678b6798a6537e6a592d473e3da798de5224d77ee65b96682e2a8e59f\"" Jan 17 00:23:19.083686 containerd[1603]: time="2026-01-17T00:23:19.082532224Z" level=info msg="StartContainer for \"6c0084a678b6798a6537e6a592d473e3da798de5224d77ee65b96682e2a8e59f\"" Jan 17 00:23:19.110275 containerd[1603]: time="2026-01-17T00:23:19.107465958Z" level=info msg="CreateContainer within sandbox \"af627ad3e1165790ba5067bb47ab00464d2fe0004527ae75ea96e5b102bce6b2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1c8de0a63e6cc1e0cd850c0a821c207e87b2f5e7581611bac81853861d335195\"" Jan 17 00:23:19.113538 containerd[1603]: time="2026-01-17T00:23:19.110952618Z" level=info msg="StartContainer for \"1c8de0a63e6cc1e0cd850c0a821c207e87b2f5e7581611bac81853861d335195\"" Jan 17 00:23:19.131858 containerd[1603]: time="2026-01-17T00:23:19.131205661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf85ce5efc75de3972b0a913bc40dd79028cd117d7d74c61dbf45c1e842ff411\"" Jan 17 00:23:19.133450 kubelet[2362]: E0117 00:23:19.133208 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:19.138400 containerd[1603]: time="2026-01-17T00:23:19.138116405Z" level=info msg="CreateContainer within sandbox \"cf85ce5efc75de3972b0a913bc40dd79028cd117d7d74c61dbf45c1e842ff411\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:23:19.179549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846720916.mount: Deactivated successfully. 
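Editor's note: the controller.go:145 "Failed to ensure lease exists, will retry" entries above show the retry interval doubling each time: 200ms, 400ms, 800ms, 1.6s, 3.2s. The sketch below merely reproduces that doubling schedule; the 7s ceiling is an assumption for the example, not a value taken from this log.

// Illustrative sketch of the doubling retry interval visible in the
// "Failed to ensure lease exists, will retry" lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed ceiling, not from the log

	for attempt := 1; attempt <= 5; attempt++ {
		// Prints 200ms, 400ms, 800ms, 1.6s, 3.2s, matching the logged intervals.
		fmt.Printf("retry %d scheduled after %s\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}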
Jan 17 00:23:19.237721 containerd[1603]: time="2026-01-17T00:23:19.237459153Z" level=info msg="CreateContainer within sandbox \"cf85ce5efc75de3972b0a913bc40dd79028cd117d7d74c61dbf45c1e842ff411\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6deaa1351d13eddd45094cd1612a50c60ffe6901fb1bfcb103a8e6daea75a1a7\"" Jan 17 00:23:19.272301 containerd[1603]: time="2026-01-17T00:23:19.271545528Z" level=info msg="StartContainer for \"6deaa1351d13eddd45094cd1612a50c60ffe6901fb1bfcb103a8e6daea75a1a7\"" Jan 17 00:23:19.898800 containerd[1603]: time="2026-01-17T00:23:19.898219601Z" level=info msg="StartContainer for \"6c0084a678b6798a6537e6a592d473e3da798de5224d77ee65b96682e2a8e59f\" returns successfully" Jan 17 00:23:20.009731 containerd[1603]: time="2026-01-17T00:23:20.008960972Z" level=info msg="StartContainer for \"1c8de0a63e6cc1e0cd850c0a821c207e87b2f5e7581611bac81853861d335195\" returns successfully" Jan 17 00:23:20.067523 kubelet[2362]: E0117 00:23:20.067468 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:20.069512 kubelet[2362]: E0117 00:23:20.068467 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:20.078361 kubelet[2362]: E0117 00:23:20.078308 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:20.078577 kubelet[2362]: E0117 00:23:20.078501 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:20.423114 containerd[1603]: time="2026-01-17T00:23:20.422552925Z" level=info msg="StartContainer for \"6deaa1351d13eddd45094cd1612a50c60ffe6901fb1bfcb103a8e6daea75a1a7\" returns successfully" Jan 17 00:23:21.057956 kubelet[2362]: I0117 00:23:21.056851 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:21.194886 kubelet[2362]: E0117 00:23:21.189562 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:21.214391 kubelet[2362]: E0117 00:23:21.197581 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:21.220041 kubelet[2362]: E0117 00:23:21.217731 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:21.220041 kubelet[2362]: E0117 00:23:21.219723 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:22.201857 kubelet[2362]: E0117 00:23:22.201786 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:22.202755 kubelet[2362]: E0117 00:23:22.202037 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 
00:23:22.217297 kubelet[2362]: E0117 00:23:22.215106 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:22.231960 kubelet[2362]: E0117 00:23:22.231802 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:23.284880 kubelet[2362]: E0117 00:23:23.284549 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:23.284880 kubelet[2362]: E0117 00:23:23.284909 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:23.638665 kubelet[2362]: E0117 00:23:23.637833 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:23.638665 kubelet[2362]: E0117 00:23:23.638119 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:24.688085 kubelet[2362]: E0117 00:23:24.687494 2362 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:23:25.613746 kubelet[2362]: E0117 00:23:25.611098 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:25.619065 kubelet[2362]: E0117 00:23:25.618588 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:26.487203 kubelet[2362]: E0117 00:23:26.485387 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:23:26.493381 kubelet[2362]: E0117 00:23:26.492462 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:30.634451 kubelet[2362]: I0117 00:23:30.630893 2362 apiserver.go:52] "Watching apiserver" Jan 17 00:23:30.698861 kubelet[2362]: I0117 00:23:30.690975 2362 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:23:31.109554 kubelet[2362]: I0117 00:23:31.109383 2362 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:23:31.116123 kubelet[2362]: E0117 00:23:31.115200 2362 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5cdfaf5d813e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:23:14.462654782 +0000 UTC m=+1.344173490,LastTimestamp:2026-01-17 00:23:14.462654782 +0000 UTC m=+1.344173490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:23:31.187877 kubelet[2362]: I0117 00:23:31.187693 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:31.211085 kubelet[2362]: E0117 00:23:31.210982 2362 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:31.211681 kubelet[2362]: I0117 00:23:31.211372 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:31.216088 kubelet[2362]: E0117 00:23:31.215810 2362 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:31.216088 kubelet[2362]: I0117 00:23:31.215853 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:31.219955 kubelet[2362]: E0117 00:23:31.219867 2362 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:35.636700 kubelet[2362]: I0117 00:23:35.636539 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:35.700486 kubelet[2362]: E0117 00:23:35.697458 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:36.201046 kubelet[2362]: E0117 00:23:36.198820 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:36.342058 kubelet[2362]: I0117 00:23:36.341989 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:36.713188 kubelet[2362]: E0117 00:23:36.383880 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:36.903293 kubelet[2362]: I0117 00:23:36.902961 2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.902906211 podStartE2EDuration="1.902906211s" podCreationTimestamp="2026-01-17 00:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:23:36.900420517 +0000 UTC m=+23.781939255" watchObservedRunningTime="2026-01-17 00:23:36.902906211 +0000 UTC m=+23.784424919" Jan 17 00:23:37.237909 kubelet[2362]: E0117 00:23:37.237522 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:37.282433 systemd[1]: Reloading requested from client PID 2642 ('systemctl') (unit session-7.scope)... Jan 17 00:23:37.282455 systemd[1]: Reloading... Jan 17 00:23:37.635377 zram_generator::config[2678]: No configuration found. 
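Editor's note: the recurring dns.go:153 "Nameserver limits exceeded" warnings mean the host's resolv.conf lists more nameservers than the resolver limit of three, so only the first three are applied (1.1.1.1 1.0.0.1 8.8.8.8 here). The sketch below illustrates that truncation with made-up resolv.conf contents; it is not the kubelet's dns.go parser.

// Sketch under assumptions: the resolv.conf content below is invented and the
// parsing simplified; it only shows why the first three nameservers are kept
// and the rest reported as omitted, as in the warnings above.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver limit that triggers the warning

func main() {
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
search example.internal`

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s); applied line: %s\n",
			len(servers)-maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Println("applied line:", strings.Join(servers, " "))
	}
}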
Jan 17 00:23:38.107167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:23:38.303969 systemd[1]: Reloading finished in 1020 ms. Jan 17 00:23:38.379458 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:38.397546 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:23:38.398417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:23:38.412044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:38.722399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:23:38.745122 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:23:38.938409 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:23:38.938409 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:23:38.938409 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:23:38.938409 kubelet[2735]: I0117 00:23:38.936955 2735 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:23:39.001094 kubelet[2735]: I0117 00:23:38.997991 2735 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:23:39.001094 kubelet[2735]: I0117 00:23:38.998301 2735 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:23:39.001094 kubelet[2735]: I0117 00:23:39.000532 2735 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:23:39.007758 kubelet[2735]: I0117 00:23:39.007191 2735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:23:39.066297 kubelet[2735]: I0117 00:23:39.066174 2735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:23:39.080065 kubelet[2735]: E0117 00:23:39.079850 2735 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:23:39.080065 kubelet[2735]: I0117 00:23:39.079999 2735 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:23:39.145294 kubelet[2735]: I0117 00:23:39.127485 2735 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:23:39.169143 kubelet[2735]: I0117 00:23:39.167912 2735 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:23:39.169143 kubelet[2735]: I0117 00:23:39.168067 2735 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:23:39.169143 kubelet[2735]: I0117 00:23:39.168594 2735 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:23:39.169143 kubelet[2735]: I0117 00:23:39.168611 2735 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:23:39.171170 kubelet[2735]: I0117 00:23:39.169600 2735 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:23:39.171170 kubelet[2735]: I0117 00:23:39.169918 2735 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:23:39.171170 kubelet[2735]: I0117 00:23:39.170070 2735 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:23:39.171170 kubelet[2735]: I0117 00:23:39.170099 2735 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:23:39.171170 kubelet[2735]: I0117 00:23:39.170112 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:23:39.180878 kubelet[2735]: I0117 00:23:39.178738 2735 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:23:39.181999 kubelet[2735]: I0117 00:23:39.181977 2735 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:23:39.183071 kubelet[2735]: I0117 00:23:39.182957 2735 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:23:39.183071 kubelet[2735]: I0117 00:23:39.183032 2735 server.go:1287] "Started kubelet" Jan 17 00:23:39.200557 kubelet[2735]: I0117 00:23:39.200475 2735 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:23:39.206317 kubelet[2735]: I0117 00:23:39.206098 2735 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:23:39.209504 kubelet[2735]: I0117 00:23:39.209447 2735 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:23:39.209852 kubelet[2735]: I0117 00:23:39.209779 2735 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:23:39.220730 kubelet[2735]: E0117 00:23:39.213934 2735 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:23:39.220730 kubelet[2735]: I0117 00:23:39.215194 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:23:39.220730 kubelet[2735]: I0117 00:23:39.215994 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:23:39.220730 kubelet[2735]: I0117 00:23:39.220075 2735 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:23:39.220730 kubelet[2735]: I0117 00:23:39.220352 2735 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:23:39.220730 kubelet[2735]: I0117 00:23:39.220548 2735 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:23:39.226120 kubelet[2735]: I0117 00:23:39.226060 2735 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:23:39.226320 kubelet[2735]: I0117 00:23:39.226206 2735 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:23:39.232165 kubelet[2735]: I0117 00:23:39.231437 2735 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:23:39.398759 kubelet[2735]: I0117 00:23:39.383159 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:23:39.430722 kubelet[2735]: I0117 00:23:39.424335 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:23:39.430722 kubelet[2735]: I0117 00:23:39.424586 2735 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:23:39.430722 kubelet[2735]: I0117 00:23:39.424712 2735 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:23:39.430722 kubelet[2735]: I0117 00:23:39.424724 2735 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:23:39.430722 kubelet[2735]: E0117 00:23:39.424879 2735 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:23:39.531051 kubelet[2735]: E0117 00:23:39.529534 2735 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:23:39.734793 kubelet[2735]: E0117 00:23:39.734760 2735 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:23:39.981129 kubelet[2735]: I0117 00:23:39.981051 2735 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:23:39.981129 kubelet[2735]: I0117 00:23:39.981105 2735 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:23:39.981129 kubelet[2735]: I0117 00:23:39.981130 2735 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:23:39.981864 kubelet[2735]: I0117 00:23:39.981532 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:23:39.981864 kubelet[2735]: I0117 00:23:39.981550 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:23:39.981864 kubelet[2735]: I0117 00:23:39.981578 2735 policy_none.go:49] "None policy: Start" Jan 17 00:23:39.981864 kubelet[2735]: I0117 00:23:39.981592 2735 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:23:39.981864 kubelet[2735]: I0117 00:23:39.981607 2735 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:23:39.982673 kubelet[2735]: I0117 00:23:39.982600 2735 state_mem.go:75] "Updated machine memory state" Jan 17 00:23:39.987378 kubelet[2735]: I0117 00:23:39.987201 2735 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:23:39.987583 kubelet[2735]: I0117 00:23:39.987516 2735 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:23:39.987682 kubelet[2735]: I0117 00:23:39.987565 2735 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:23:39.991832 kubelet[2735]: I0117 00:23:39.988952 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:23:40.009783 kubelet[2735]: E0117 00:23:40.005060 2735 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:23:40.175795 kubelet[2735]: I0117 00:23:40.174958 2735 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:23:40.179597 kubelet[2735]: I0117 00:23:40.176609 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:40.179597 kubelet[2735]: I0117 00:23:40.178601 2735 apiserver.go:52] "Watching apiserver" Jan 17 00:23:40.179826 kubelet[2735]: I0117 00:23:40.179696 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.181008 kubelet[2735]: I0117 00:23:40.180763 2735 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:40.227956 kubelet[2735]: I0117 00:23:40.223452 2735 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:23:40.312503 kubelet[2735]: I0117 00:23:40.311518 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.312503 kubelet[2735]: I0117 00:23:40.311614 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.312503 kubelet[2735]: E0117 00:23:40.311701 2735 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.312503 kubelet[2735]: I0117 00:23:40.311697 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:40.312503 kubelet[2735]: I0117 00:23:40.312369 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4af11cf2b8ce2176c204065d7f050cac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4af11cf2b8ce2176c204065d7f050cac\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:40.312503 kubelet[2735]: I0117 00:23:40.312391 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4af11cf2b8ce2176c204065d7f050cac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4af11cf2b8ce2176c204065d7f050cac\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:40.319811 kubelet[2735]: I0117 00:23:40.312418 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4af11cf2b8ce2176c204065d7f050cac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4af11cf2b8ce2176c204065d7f050cac\") " 
pod="kube-system/kube-apiserver-localhost" Jan 17 00:23:40.319811 kubelet[2735]: I0117 00:23:40.312445 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.319811 kubelet[2735]: I0117 00:23:40.312471 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.319811 kubelet[2735]: I0117 00:23:40.312526 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:23:40.319811 kubelet[2735]: E0117 00:23:40.313112 2735 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:23:40.338206 kubelet[2735]: I0117 00:23:40.337333 2735 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:23:40.338206 kubelet[2735]: I0117 00:23:40.337546 2735 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:23:40.618075 kubelet[2735]: E0117 00:23:40.615879 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:40.618075 kubelet[2735]: E0117 00:23:40.617792 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:40.620818 kubelet[2735]: E0117 00:23:40.618612 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:40.704673 kubelet[2735]: E0117 00:23:40.704556 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:40.705553 kubelet[2735]: E0117 00:23:40.705333 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:40.706007 kubelet[2735]: E0117 00:23:40.705487 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:40.723671 kubelet[2735]: I0117 00:23:40.723370 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.723286485 podStartE2EDuration="723.286485ms" podCreationTimestamp="2026-01-17 00:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-17 00:23:40.722841845 +0000 UTC m=+1.937795987" watchObservedRunningTime="2026-01-17 00:23:40.723286485 +0000 UTC m=+1.938240597" Jan 17 00:23:41.174743 kubelet[2735]: I0117 00:23:41.172625 2735 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:23:41.179411 containerd[1603]: time="2026-01-17T00:23:41.177804502Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:23:41.183142 kubelet[2735]: I0117 00:23:41.181965 2735 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:23:41.533705 kubelet[2735]: I0117 00:23:41.532735 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltr5t\" (UniqueName: \"kubernetes.io/projected/17c698d7-09e9-4fba-8f8c-71b9b1ac0e20-kube-api-access-ltr5t\") pod \"kube-proxy-pmh2b\" (UID: \"17c698d7-09e9-4fba-8f8c-71b9b1ac0e20\") " pod="kube-system/kube-proxy-pmh2b" Jan 17 00:23:41.533705 kubelet[2735]: I0117 00:23:41.532854 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17c698d7-09e9-4fba-8f8c-71b9b1ac0e20-lib-modules\") pod \"kube-proxy-pmh2b\" (UID: \"17c698d7-09e9-4fba-8f8c-71b9b1ac0e20\") " pod="kube-system/kube-proxy-pmh2b" Jan 17 00:23:41.533705 kubelet[2735]: I0117 00:23:41.532885 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17c698d7-09e9-4fba-8f8c-71b9b1ac0e20-kube-proxy\") pod \"kube-proxy-pmh2b\" (UID: \"17c698d7-09e9-4fba-8f8c-71b9b1ac0e20\") " pod="kube-system/kube-proxy-pmh2b" Jan 17 00:23:41.533705 kubelet[2735]: I0117 00:23:41.532911 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17c698d7-09e9-4fba-8f8c-71b9b1ac0e20-xtables-lock\") pod \"kube-proxy-pmh2b\" (UID: \"17c698d7-09e9-4fba-8f8c-71b9b1ac0e20\") " pod="kube-system/kube-proxy-pmh2b" Jan 17 00:23:41.723770 kubelet[2735]: E0117 00:23:41.723218 2735 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:23:41.723770 kubelet[2735]: E0117 00:23:41.723431 2735 projected.go:194] Error preparing data for projected volume kube-api-access-ltr5t for pod kube-system/kube-proxy-pmh2b: configmap "kube-root-ca.crt" not found Jan 17 00:23:41.724382 kubelet[2735]: E0117 00:23:41.723724 2735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17c698d7-09e9-4fba-8f8c-71b9b1ac0e20-kube-api-access-ltr5t podName:17c698d7-09e9-4fba-8f8c-71b9b1ac0e20 nodeName:}" failed. No retries permitted until 2026-01-17 00:23:42.223693822 +0000 UTC m=+3.438647924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltr5t" (UniqueName: "kubernetes.io/projected/17c698d7-09e9-4fba-8f8c-71b9b1ac0e20-kube-api-access-ltr5t") pod "kube-proxy-pmh2b" (UID: "17c698d7-09e9-4fba-8f8c-71b9b1ac0e20") : configmap "kube-root-ca.crt" not found Jan 17 00:23:41.730190 kubelet[2735]: E0117 00:23:41.728847 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:41.730190 kubelet[2735]: E0117 00:23:41.729510 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:42.591402 kubelet[2735]: E0117 00:23:42.590810 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:42.593866 containerd[1603]: time="2026-01-17T00:23:42.592726183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pmh2b,Uid:17c698d7-09e9-4fba-8f8c-71b9b1ac0e20,Namespace:kube-system,Attempt:0,}" Jan 17 00:23:42.644079 kubelet[2735]: I0117 00:23:42.642071 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcr2\" (UniqueName: \"kubernetes.io/projected/f6b4a884-1472-4d9a-9592-1e698bb63b3c-kube-api-access-dlcr2\") pod \"tigera-operator-7dcd859c48-mqk92\" (UID: \"f6b4a884-1472-4d9a-9592-1e698bb63b3c\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqk92" Jan 17 00:23:42.644079 kubelet[2735]: I0117 00:23:42.642303 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f6b4a884-1472-4d9a-9592-1e698bb63b3c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mqk92\" (UID: \"f6b4a884-1472-4d9a-9592-1e698bb63b3c\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqk92" Jan 17 00:23:42.823429 containerd[1603]: time="2026-01-17T00:23:42.821455957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:42.823429 containerd[1603]: time="2026-01-17T00:23:42.821523373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:42.823429 containerd[1603]: time="2026-01-17T00:23:42.821564760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:42.823429 containerd[1603]: time="2026-01-17T00:23:42.821786404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:42.968719 containerd[1603]: time="2026-01-17T00:23:42.968437140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqk92,Uid:f6b4a884-1472-4d9a-9592-1e698bb63b3c,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:23:43.027938 containerd[1603]: time="2026-01-17T00:23:43.027786403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pmh2b,Uid:17c698d7-09e9-4fba-8f8c-71b9b1ac0e20,Namespace:kube-system,Attempt:0,} returns sandbox id \"68ec5278e26b8cc88664661ac9c70abfad312b83abb5c2ab2c9e41cc5b8a0f50\"" Jan 17 00:23:43.033361 kubelet[2735]: E0117 00:23:43.029817 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:43.035840 containerd[1603]: time="2026-01-17T00:23:43.035730186Z" level=info msg="CreateContainer within sandbox \"68ec5278e26b8cc88664661ac9c70abfad312b83abb5c2ab2c9e41cc5b8a0f50\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:23:43.087090 containerd[1603]: time="2026-01-17T00:23:43.084135909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:43.087090 containerd[1603]: time="2026-01-17T00:23:43.084214626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:43.087090 containerd[1603]: time="2026-01-17T00:23:43.084312249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:43.087090 containerd[1603]: time="2026-01-17T00:23:43.084456377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:43.112884 containerd[1603]: time="2026-01-17T00:23:43.112484023Z" level=info msg="CreateContainer within sandbox \"68ec5278e26b8cc88664661ac9c70abfad312b83abb5c2ab2c9e41cc5b8a0f50\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3310420e0861ae59053fd9b149f0102d7c5d1d89c3761912609fbea7c40fe6a4\"" Jan 17 00:23:43.116086 containerd[1603]: time="2026-01-17T00:23:43.114311386Z" level=info msg="StartContainer for \"3310420e0861ae59053fd9b149f0102d7c5d1d89c3761912609fbea7c40fe6a4\"" Jan 17 00:23:43.277967 containerd[1603]: time="2026-01-17T00:23:43.277571051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqk92,Uid:f6b4a884-1472-4d9a-9592-1e698bb63b3c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"361b669b7511867d6026ecf3e4205f2c0bc465e9f0d41db94e118c8cea0ccc06\"" Jan 17 00:23:43.292898 containerd[1603]: time="2026-01-17T00:23:43.291738878Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:23:43.338783 containerd[1603]: time="2026-01-17T00:23:43.338709933Z" level=info msg="StartContainer for \"3310420e0861ae59053fd9b149f0102d7c5d1d89c3761912609fbea7c40fe6a4\" returns successfully" Jan 17 00:23:43.762506 kubelet[2735]: E0117 00:23:43.759490 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:43.822018 kubelet[2735]: I0117 00:23:43.821803 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pmh2b" podStartSLOduration=2.821780433 podStartE2EDuration="2.821780433s" podCreationTimestamp="2026-01-17 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:23:43.821521299 +0000 UTC m=+5.036475402" watchObservedRunningTime="2026-01-17 00:23:43.821780433 +0000 UTC m=+5.036734535" Jan 17 00:23:44.473748 kubelet[2735]: E0117 00:23:44.473709 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:44.765188 kubelet[2735]: E0117 00:23:44.764965 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:44.992399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395809088.mount: Deactivated successfully. 
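
[Note, not part of the journal: the recurring dns.go:153 "Nameserver limits exceeded" errors throughout this log mean the host's /etc/resolv.conf lists more nameserver entries than the three the glibc resolver (and therefore the kubelet) will use, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied when pod resolver configs are built. A rough sketch of that truncation follows, for illustration only; it is not the kubelet's actual dns.go code, and the resolv.conf path is the conventional one.]

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the limit the "Nameserver limits exceeded" message refers to:
// the libc resolver honours at most three nameserver lines.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Extra entries are dropped, which is what the kubelet is warning about.
		fmt.Printf("nameserver limit exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("applied nameservers: %s\n", strings.Join(servers, " "))
}

[The host resolv.conf evidently carries more than three entries here; shortening it, or pointing the kubelet at a different file via its resolvConf setting, would silence the warning.]
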
Jan 17 00:23:46.107469 kubelet[2735]: E0117 00:23:46.106931 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:46.142300 kubelet[2735]: E0117 00:23:46.141391 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:46.844820 kubelet[2735]: E0117 00:23:46.844399 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:46.844820 kubelet[2735]: E0117 00:23:46.844788 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:23:48.768142 containerd[1603]: time="2026-01-17T00:23:48.767584980Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:48.770065 containerd[1603]: time="2026-01-17T00:23:48.769813511Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:23:48.772495 containerd[1603]: time="2026-01-17T00:23:48.772289052Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:48.778394 containerd[1603]: time="2026-01-17T00:23:48.778311799Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:48.780388 containerd[1603]: time="2026-01-17T00:23:48.780288478Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.488476262s" Jan 17 00:23:48.780388 containerd[1603]: time="2026-01-17T00:23:48.780369419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:23:48.790601 containerd[1603]: time="2026-01-17T00:23:48.789507122Z" level=info msg="CreateContainer within sandbox \"361b669b7511867d6026ecf3e4205f2c0bc465e9f0d41db94e118c8cea0ccc06\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:23:48.829054 containerd[1603]: time="2026-01-17T00:23:48.828942750Z" level=info msg="CreateContainer within sandbox \"361b669b7511867d6026ecf3e4205f2c0bc465e9f0d41db94e118c8cea0ccc06\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9319d0e107f43b02b590d30f5f4ff2876ad2438a5b1fc48b1b11482fde02eac3\"" Jan 17 00:23:48.831275 containerd[1603]: time="2026-01-17T00:23:48.831125832Z" level=info msg="StartContainer for \"9319d0e107f43b02b590d30f5f4ff2876ad2438a5b1fc48b1b11482fde02eac3\"" Jan 17 00:23:48.928613 systemd[1]: run-containerd-runc-k8s.io-9319d0e107f43b02b590d30f5f4ff2876ad2438a5b1fc48b1b11482fde02eac3-runc.JbH3nF.mount: Deactivated successfully. 
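
[Note, not part of the journal: the PullImage/CreateContainer/StartContainer lines above are containerd's side of CRI calls issued by the kubelet against the sandbox returned earlier for tigera-operator-7dcd859c48-mqk92. A trimmed sketch of those two calls follows, assuming the k8s.io/cri-api/pkg/apis/runtime/v1 Go bindings, containerd's default socket at /run/containerd/containerd.sock, and a sandbox config reduced to its required metadata; the sandbox id, image reference, and pod identifiers are the ones logged above, everything else is an assumption.]

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Default containerd CRI endpoint (assumed; not taken from this log).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rtc := runtimeapi.NewRuntimeServiceClient(conn)

	// Must match the config used when the sandbox was created; trimmed to the
	// required metadata, which is taken from the RunPodSandbox line above.
	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "tigera-operator-7dcd859c48-mqk92",
			Namespace: "tigera-operator",
			Uid:       "f6b4a884-1472-4d9a-9592-1e698bb63b3c",
			Attempt:   0,
		},
	}

	created, err := rtc.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		// Sandbox id returned by RunPodSandbox in the log above.
		PodSandboxId: "361b669b7511867d6026ecf3e4205f2c0bc465e9f0d41db94e118c8cea0ccc06",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "tigera-operator", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rtc.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}

[crictl speaks to the same endpoint; the sandbox and container ids it prints are the ones echoed in these containerd lines.]
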
Jan 17 00:23:49.030877 containerd[1603]: time="2026-01-17T00:23:49.028334018Z" level=info msg="StartContainer for \"9319d0e107f43b02b590d30f5f4ff2876ad2438a5b1fc48b1b11482fde02eac3\" returns successfully" Jan 17 00:23:49.924149 kubelet[2735]: I0117 00:23:49.924074 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mqk92" podStartSLOduration=2.424368929 podStartE2EDuration="7.924048284s" podCreationTimestamp="2026-01-17 00:23:42 +0000 UTC" firstStartedPulling="2026-01-17 00:23:43.282525433 +0000 UTC m=+4.497479556" lastFinishedPulling="2026-01-17 00:23:48.782204809 +0000 UTC m=+9.997158911" observedRunningTime="2026-01-17 00:23:49.923325665 +0000 UTC m=+11.138279767" watchObservedRunningTime="2026-01-17 00:23:49.924048284 +0000 UTC m=+11.139002406" Jan 17 00:23:56.436476 sudo[1799]: pam_unix(sudo:session): session closed for user root Jan 17 00:23:56.445972 sshd[1793]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:56.457314 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:54588.service: Deactivated successfully. Jan 17 00:23:56.464530 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:23:56.465189 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:23:56.468419 systemd-logind[1580]: Removed session 7. Jan 17 00:24:15.703980 kubelet[2735]: I0117 00:24:15.703879 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/304ba865-efb9-4057-8232-9f07d40eb58d-typha-certs\") pod \"calico-typha-599c497958-77q6w\" (UID: \"304ba865-efb9-4057-8232-9f07d40eb58d\") " pod="calico-system/calico-typha-599c497958-77q6w" Jan 17 00:24:15.703980 kubelet[2735]: I0117 00:24:15.703969 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304ba865-efb9-4057-8232-9f07d40eb58d-tigera-ca-bundle\") pod \"calico-typha-599c497958-77q6w\" (UID: \"304ba865-efb9-4057-8232-9f07d40eb58d\") " pod="calico-system/calico-typha-599c497958-77q6w" Jan 17 00:24:15.704943 kubelet[2735]: I0117 00:24:15.704005 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698gb\" (UniqueName: \"kubernetes.io/projected/304ba865-efb9-4057-8232-9f07d40eb58d-kube-api-access-698gb\") pod \"calico-typha-599c497958-77q6w\" (UID: \"304ba865-efb9-4057-8232-9f07d40eb58d\") " pod="calico-system/calico-typha-599c497958-77q6w" Jan 17 00:24:15.749674 kubelet[2735]: E0117 00:24:15.743425 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:15.805355 kubelet[2735]: I0117 00:24:15.805190 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-cni-log-dir\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.805355 kubelet[2735]: I0117 00:24:15.805330 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/92d83a58-8c44-4724-83b0-262810f46887-tigera-ca-bundle\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.805355 kubelet[2735]: I0117 00:24:15.805361 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jgw\" (UniqueName: \"kubernetes.io/projected/92d83a58-8c44-4724-83b0-262810f46887-kube-api-access-f6jgw\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.806859 kubelet[2735]: I0117 00:24:15.805396 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-flexvol-driver-host\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.806859 kubelet[2735]: I0117 00:24:15.805457 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-var-lib-calico\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.806859 kubelet[2735]: I0117 00:24:15.805484 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-cni-net-dir\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.806859 kubelet[2735]: I0117 00:24:15.805505 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-lib-modules\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.806859 kubelet[2735]: I0117 00:24:15.805531 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-var-run-calico\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.807114 kubelet[2735]: I0117 00:24:15.805585 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-xtables-lock\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.807114 kubelet[2735]: I0117 00:24:15.805675 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-cni-bin-dir\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.807114 kubelet[2735]: I0117 00:24:15.805797 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/92d83a58-8c44-4724-83b0-262810f46887-node-certs\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.807114 kubelet[2735]: I0117 00:24:15.805912 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/92d83a58-8c44-4724-83b0-262810f46887-policysync\") pod \"calico-node-p84bc\" (UID: \"92d83a58-8c44-4724-83b0-262810f46887\") " pod="calico-system/calico-node-p84bc" Jan 17 00:24:15.911584 kubelet[2735]: I0117 00:24:15.911203 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bdf7dcb1-7f01-49ed-b25d-dd851c91e195-kubelet-dir\") pod \"csi-node-driver-pzcck\" (UID: \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\") " pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:15.911584 kubelet[2735]: I0117 00:24:15.911347 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bdf7dcb1-7f01-49ed-b25d-dd851c91e195-socket-dir\") pod \"csi-node-driver-pzcck\" (UID: \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\") " pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:15.911584 kubelet[2735]: I0117 00:24:15.911374 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tf4q\" (UniqueName: \"kubernetes.io/projected/bdf7dcb1-7f01-49ed-b25d-dd851c91e195-kube-api-access-8tf4q\") pod \"csi-node-driver-pzcck\" (UID: \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\") " pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:15.911584 kubelet[2735]: I0117 00:24:15.911436 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bdf7dcb1-7f01-49ed-b25d-dd851c91e195-registration-dir\") pod \"csi-node-driver-pzcck\" (UID: \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\") " pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:15.911584 kubelet[2735]: I0117 00:24:15.911487 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bdf7dcb1-7f01-49ed-b25d-dd851c91e195-varrun\") pod \"csi-node-driver-pzcck\" (UID: \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\") " pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:15.924001 kubelet[2735]: E0117 00:24:15.920939 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:15.924001 kubelet[2735]: W0117 00:24:15.920971 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:15.924001 kubelet[2735]: E0117 00:24:15.923540 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:15.940465 kubelet[2735]: E0117 00:24:15.940323 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:15.940465 kubelet[2735]: W0117 00:24:15.940377 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:15.940465 kubelet[2735]: E0117 00:24:15.940403 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:15.991741 kubelet[2735]: E0117 00:24:15.989446 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:15.991741 kubelet[2735]: W0117 00:24:15.989472 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:15.991741 kubelet[2735]: E0117 00:24:15.989501 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.015850 kubelet[2735]: E0117 00:24:16.015356 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.015850 kubelet[2735]: W0117 00:24:16.015387 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.015850 kubelet[2735]: E0117 00:24:16.015416 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.020175 kubelet[2735]: E0117 00:24:16.020033 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.020175 kubelet[2735]: W0117 00:24:16.020099 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.020175 kubelet[2735]: E0117 00:24:16.020141 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.023443 kubelet[2735]: E0117 00:24:16.021997 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.023443 kubelet[2735]: W0117 00:24:16.022043 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.023443 kubelet[2735]: E0117 00:24:16.022338 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:16.026121 kubelet[2735]: E0117 00:24:16.025387 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.026121 kubelet[2735]: W0117 00:24:16.025443 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.026121 kubelet[2735]: E0117 00:24:16.025515 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.029790 kubelet[2735]: E0117 00:24:16.029667 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.029790 kubelet[2735]: W0117 00:24:16.029714 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.030013 kubelet[2735]: E0117 00:24:16.029944 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.035010 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.037942 kubelet[2735]: W0117 00:24:16.035034 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.035335 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.036277 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.037942 kubelet[2735]: W0117 00:24:16.036292 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.036421 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.036647 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.037942 kubelet[2735]: W0117 00:24:16.036658 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.036814 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:16.037942 kubelet[2735]: E0117 00:24:16.037079 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.038518 kubelet[2735]: W0117 00:24:16.037092 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.038518 kubelet[2735]: E0117 00:24:16.038051 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.038518 kubelet[2735]: W0117 00:24:16.038066 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.038518 kubelet[2735]: E0117 00:24:16.038469 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.038518 kubelet[2735]: W0117 00:24:16.038482 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.038740 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.040158 kubelet[2735]: W0117 00:24:16.038822 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.039084 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.040158 kubelet[2735]: W0117 00:24:16.039097 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.039116 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.039468 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.040158 kubelet[2735]: W0117 00:24:16.039482 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.039498 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.039964 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:16.040158 kubelet[2735]: E0117 00:24:16.039988 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.040672 kubelet[2735]: E0117 00:24:16.040028 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.040672 kubelet[2735]: E0117 00:24:16.040107 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.042949 kubelet[2735]: E0117 00:24:16.042490 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.042949 kubelet[2735]: W0117 00:24:16.042530 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.042949 kubelet[2735]: E0117 00:24:16.042555 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.043479 kubelet[2735]: E0117 00:24:16.043405 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.043479 kubelet[2735]: W0117 00:24:16.043455 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.043686 kubelet[2735]: E0117 00:24:16.043606 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.049135 kubelet[2735]: E0117 00:24:16.045094 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.049135 kubelet[2735]: W0117 00:24:16.045135 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.049135 kubelet[2735]: E0117 00:24:16.045348 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.049135 kubelet[2735]: E0117 00:24:16.047456 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.049135 kubelet[2735]: W0117 00:24:16.047469 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.049135 kubelet[2735]: E0117 00:24:16.047662 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:16.049667 kubelet[2735]: E0117 00:24:16.049552 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.049667 kubelet[2735]: W0117 00:24:16.049581 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.049667 kubelet[2735]: E0117 00:24:16.049621 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.054885 kubelet[2735]: E0117 00:24:16.051864 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.054885 kubelet[2735]: W0117 00:24:16.051887 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.054885 kubelet[2735]: E0117 00:24:16.052035 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.054885 kubelet[2735]: E0117 00:24:16.052572 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.054885 kubelet[2735]: W0117 00:24:16.052588 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.054885 kubelet[2735]: E0117 00:24:16.052937 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.062438 kubelet[2735]: E0117 00:24:16.058059 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.062438 kubelet[2735]: W0117 00:24:16.058107 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.062438 kubelet[2735]: E0117 00:24:16.058365 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.062438 kubelet[2735]: E0117 00:24:16.060321 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.062438 kubelet[2735]: W0117 00:24:16.060336 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.062438 kubelet[2735]: E0117 00:24:16.060498 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:16.065361 kubelet[2735]: E0117 00:24:16.063823 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.065361 kubelet[2735]: W0117 00:24:16.063839 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.065361 kubelet[2735]: E0117 00:24:16.063859 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.065361 kubelet[2735]: E0117 00:24:16.064544 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.065361 kubelet[2735]: W0117 00:24:16.064559 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.065361 kubelet[2735]: E0117 00:24:16.064701 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.085683 kubelet[2735]: E0117 00:24:16.085657 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:16.085949 kubelet[2735]: W0117 00:24:16.085873 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:16.085949 kubelet[2735]: E0117 00:24:16.085906 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:16.120435 kubelet[2735]: E0117 00:24:16.117207 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:16.120579 containerd[1603]: time="2026-01-17T00:24:16.118850721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599c497958-77q6w,Uid:304ba865-efb9-4057-8232-9f07d40eb58d,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:16.231113 containerd[1603]: time="2026-01-17T00:24:16.230560013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:24:16.231113 containerd[1603]: time="2026-01-17T00:24:16.230636596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:24:16.231113 containerd[1603]: time="2026-01-17T00:24:16.230656964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:16.231113 containerd[1603]: time="2026-01-17T00:24:16.230832261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:16.255118 kubelet[2735]: E0117 00:24:16.243343 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:16.255297 containerd[1603]: time="2026-01-17T00:24:16.244014281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p84bc,Uid:92d83a58-8c44-4724-83b0-262810f46887,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:16.369838 containerd[1603]: time="2026-01-17T00:24:16.369607431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:24:16.371485 containerd[1603]: time="2026-01-17T00:24:16.370296129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:24:16.371485 containerd[1603]: time="2026-01-17T00:24:16.370326817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:16.371485 containerd[1603]: time="2026-01-17T00:24:16.370459244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:16.496398 containerd[1603]: time="2026-01-17T00:24:16.496355047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599c497958-77q6w,Uid:304ba865-efb9-4057-8232-9f07d40eb58d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c02293587268901c4df6472d3628e1b2c75f43075dd097c24c1defeb8214919\"" Jan 17 00:24:16.517600 kubelet[2735]: E0117 00:24:16.517493 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:16.576675 containerd[1603]: time="2026-01-17T00:24:16.576633599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:24:16.614142 containerd[1603]: time="2026-01-17T00:24:16.614019869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p84bc,Uid:92d83a58-8c44-4724-83b0-262810f46887,Namespace:calico-system,Attempt:0,} returns sandbox id \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\"" Jan 17 00:24:16.626697 kubelet[2735]: E0117 00:24:16.625848 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:17.433667 kubelet[2735]: E0117 00:24:17.433200 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:17.602411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266040875.mount: Deactivated successfully. 
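
[Note, not part of the journal: the repeated driver-call.go and plugins.go errors above, which continue below, come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the nodeagent~uds FlexVolume driver used by Calico, before calico-node's init container has installed the binary into the flexvol-driver-host path mounted earlier: the executable is missing, the probe produces empty output, and unmarshalling "" fails with "unexpected end of JSON input". For reference, a FlexVolume driver only has to answer the init call with a JSON status object; the sketch below is illustrative and is not Calico's actual uds binary.]

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet expects back from a
// FlexVolume driver invocation.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"}, 1)
	}
	switch os.Args[1] {
	case "init":
		// Printing nothing here is exactly what produces the
		// "unexpected end of JSON input" errors in the log above.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	default:
		reply(driverStatus{Status: "Not supported"}, 1)
	}
}

[Once the real uds binary is dropped into that host path by calico-node, these probe errors should stop.]
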
Jan 17 00:24:19.319360 containerd[1603]: time="2026-01-17T00:24:19.319199499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:19.324195 containerd[1603]: time="2026-01-17T00:24:19.323964111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 17 00:24:19.324344 containerd[1603]: time="2026-01-17T00:24:19.324302744Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:19.330654 containerd[1603]: time="2026-01-17T00:24:19.330518160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:19.333274 containerd[1603]: time="2026-01-17T00:24:19.333039630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.755308139s" Jan 17 00:24:19.333609 containerd[1603]: time="2026-01-17T00:24:19.333428567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 00:24:19.335421 containerd[1603]: time="2026-01-17T00:24:19.335384916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:24:19.365365 containerd[1603]: time="2026-01-17T00:24:19.365080284Z" level=info msg="CreateContainer within sandbox \"0c02293587268901c4df6472d3628e1b2c75f43075dd097c24c1defeb8214919\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:24:19.396299 containerd[1603]: time="2026-01-17T00:24:19.396116420Z" level=info msg="CreateContainer within sandbox \"0c02293587268901c4df6472d3628e1b2c75f43075dd097c24c1defeb8214919\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"af167b79f08a2acf07d6c7e99e6f343dcabd2765cc665c53daab99bf877abaf0\"" Jan 17 00:24:19.398940 containerd[1603]: time="2026-01-17T00:24:19.397857383Z" level=info msg="StartContainer for \"af167b79f08a2acf07d6c7e99e6f343dcabd2765cc665c53daab99bf877abaf0\"" Jan 17 00:24:19.431289 kubelet[2735]: E0117 00:24:19.429864 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:19.630945 containerd[1603]: time="2026-01-17T00:24:19.630686007Z" level=info msg="StartContainer for \"af167b79f08a2acf07d6c7e99e6f343dcabd2765cc665c53daab99bf877abaf0\" returns successfully" Jan 17 00:24:19.680435 kubelet[2735]: E0117 00:24:19.679750 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:19.711524 kubelet[2735]: E0117 00:24:19.710968 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Jan 17 00:24:19.711524 kubelet[2735]: W0117 00:24:19.710999 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.711524 kubelet[2735]: E0117 00:24:19.711029 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.713593 kubelet[2735]: E0117 00:24:19.711673 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.713593 kubelet[2735]: W0117 00:24:19.711688 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.713593 kubelet[2735]: E0117 00:24:19.711705 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.713593 kubelet[2735]: E0117 00:24:19.712492 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.713593 kubelet[2735]: W0117 00:24:19.712505 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.713593 kubelet[2735]: E0117 00:24:19.712519 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.716700 kubelet[2735]: E0117 00:24:19.714380 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.716700 kubelet[2735]: W0117 00:24:19.714393 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.716700 kubelet[2735]: E0117 00:24:19.714406 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.716700 kubelet[2735]: E0117 00:24:19.715075 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.716700 kubelet[2735]: W0117 00:24:19.715089 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.716700 kubelet[2735]: E0117 00:24:19.715102 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:19.717201 kubelet[2735]: E0117 00:24:19.717060 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.717201 kubelet[2735]: W0117 00:24:19.717077 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.717201 kubelet[2735]: E0117 00:24:19.717090 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.718048 kubelet[2735]: E0117 00:24:19.717571 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.718048 kubelet[2735]: W0117 00:24:19.717586 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.718048 kubelet[2735]: E0117 00:24:19.717599 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.719220 kubelet[2735]: E0117 00:24:19.718987 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.719220 kubelet[2735]: W0117 00:24:19.719001 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.719220 kubelet[2735]: E0117 00:24:19.719091 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.720865 kubelet[2735]: E0117 00:24:19.720810 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.720865 kubelet[2735]: W0117 00:24:19.720845 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.720865 kubelet[2735]: E0117 00:24:19.720858 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.722132 kubelet[2735]: E0117 00:24:19.721842 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.722132 kubelet[2735]: W0117 00:24:19.721910 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.722132 kubelet[2735]: E0117 00:24:19.721924 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:19.722816 kubelet[2735]: E0117 00:24:19.722801 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.722928 kubelet[2735]: W0117 00:24:19.722870 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.722928 kubelet[2735]: E0117 00:24:19.722883 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.724052 kubelet[2735]: E0117 00:24:19.723985 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.724052 kubelet[2735]: W0117 00:24:19.723996 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.724052 kubelet[2735]: E0117 00:24:19.724006 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.724485 kubelet[2735]: E0117 00:24:19.724371 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.724485 kubelet[2735]: W0117 00:24:19.724382 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.724485 kubelet[2735]: E0117 00:24:19.724393 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.726661 kubelet[2735]: E0117 00:24:19.725560 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.726661 kubelet[2735]: W0117 00:24:19.726505 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.726661 kubelet[2735]: E0117 00:24:19.726522 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.728371 kubelet[2735]: E0117 00:24:19.728338 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.728371 kubelet[2735]: W0117 00:24:19.728355 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.728371 kubelet[2735]: E0117 00:24:19.728367 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:19.757641 kubelet[2735]: I0117 00:24:19.757464 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599c497958-77q6w" podStartSLOduration=1.994075697 podStartE2EDuration="4.757416484s" podCreationTimestamp="2026-01-17 00:24:15 +0000 UTC" firstStartedPulling="2026-01-17 00:24:16.571536899 +0000 UTC m=+37.786491011" lastFinishedPulling="2026-01-17 00:24:19.334877686 +0000 UTC m=+40.549831798" observedRunningTime="2026-01-17 00:24:19.756197465 +0000 UTC m=+40.971151588" watchObservedRunningTime="2026-01-17 00:24:19.757416484 +0000 UTC m=+40.972370606" Jan 17 00:24:19.808119 kubelet[2735]: E0117 00:24:19.806980 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.808119 kubelet[2735]: W0117 00:24:19.807004 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.808119 kubelet[2735]: E0117 00:24:19.807106 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.808438 kubelet[2735]: E0117 00:24:19.808339 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.808438 kubelet[2735]: W0117 00:24:19.808352 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.808522 kubelet[2735]: E0117 00:24:19.808447 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.810665 kubelet[2735]: E0117 00:24:19.809324 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.810665 kubelet[2735]: W0117 00:24:19.809338 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.810665 kubelet[2735]: E0117 00:24:19.809431 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.810665 kubelet[2735]: E0117 00:24:19.810314 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.810665 kubelet[2735]: W0117 00:24:19.810325 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.811097 kubelet[2735]: E0117 00:24:19.810694 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:19.813272 kubelet[2735]: E0117 00:24:19.811909 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.813272 kubelet[2735]: W0117 00:24:19.811926 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.813272 kubelet[2735]: E0117 00:24:19.812137 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.813272 kubelet[2735]: E0117 00:24:19.813047 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.813272 kubelet[2735]: W0117 00:24:19.813061 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.813511 kubelet[2735]: E0117 00:24:19.813361 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.814981 kubelet[2735]: E0117 00:24:19.814406 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.814981 kubelet[2735]: W0117 00:24:19.814419 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.814981 kubelet[2735]: E0117 00:24:19.814527 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.817292 kubelet[2735]: E0117 00:24:19.815327 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.817292 kubelet[2735]: W0117 00:24:19.815425 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.817292 kubelet[2735]: E0117 00:24:19.815542 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.817292 kubelet[2735]: E0117 00:24:19.816447 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.817292 kubelet[2735]: W0117 00:24:19.816459 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.817292 kubelet[2735]: E0117 00:24:19.816878 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:19.818394 kubelet[2735]: E0117 00:24:19.817708 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.818394 kubelet[2735]: W0117 00:24:19.818352 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.820304 kubelet[2735]: E0117 00:24:19.819333 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.820304 kubelet[2735]: E0117 00:24:19.820104 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.820304 kubelet[2735]: W0117 00:24:19.820115 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.820546 kubelet[2735]: E0117 00:24:19.820220 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.822145 kubelet[2735]: E0117 00:24:19.821156 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.822145 kubelet[2735]: W0117 00:24:19.821170 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.822145 kubelet[2735]: E0117 00:24:19.821541 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.822145 kubelet[2735]: E0117 00:24:19.821880 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.822145 kubelet[2735]: W0117 00:24:19.821891 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.822145 kubelet[2735]: E0117 00:24:19.822022 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.822734 kubelet[2735]: E0117 00:24:19.822580 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.822734 kubelet[2735]: W0117 00:24:19.822592 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.822734 kubelet[2735]: E0117 00:24:19.822695 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:19.824203 kubelet[2735]: E0117 00:24:19.824142 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.824394 kubelet[2735]: W0117 00:24:19.824305 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.824755 kubelet[2735]: E0117 00:24:19.824453 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.825583 kubelet[2735]: E0117 00:24:19.825569 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.825870 kubelet[2735]: W0117 00:24:19.825709 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.825870 kubelet[2735]: E0117 00:24:19.825724 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.826692 kubelet[2735]: E0117 00:24:19.826679 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.827071 kubelet[2735]: W0117 00:24:19.826843 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.827071 kubelet[2735]: E0117 00:24:19.826859 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:19.828864 kubelet[2735]: E0117 00:24:19.828548 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:19.828864 kubelet[2735]: W0117 00:24:19.828562 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:19.828864 kubelet[2735]: E0117 00:24:19.828573 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:20.516612 containerd[1603]: time="2026-01-17T00:24:20.515598853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:20.520801 containerd[1603]: time="2026-01-17T00:24:20.519519501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:24:20.524708 containerd[1603]: time="2026-01-17T00:24:20.523025646Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:20.529656 containerd[1603]: time="2026-01-17T00:24:20.529586948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:20.530882 containerd[1603]: time="2026-01-17T00:24:20.530745453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.195316385s" Jan 17 00:24:20.530882 containerd[1603]: time="2026-01-17T00:24:20.530863673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:24:20.538909 containerd[1603]: time="2026-01-17T00:24:20.538729182Z" level=info msg="CreateContainer within sandbox \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:24:20.580412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692948624.mount: Deactivated successfully. 
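The burst of driver-call.go and plugins.go errors above comes from the kubelet's periodic FlexVolume probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, runs each driver binary with the argument "init", and expects a JSON status object on stdout. The nodeagent~uds/uds binary does not exist yet (it is installed by Calico's flexvol-driver init container, whose image is being pulled in the entries above), so the call yields empty output and the JSON unmarshal fails. The following is a minimal sketch of the calling convention the kubelet expects from a FlexVolume driver, for illustration only; it is not Calico's actual uds binary.

    // Hypothetical, minimal FlexVolume driver: the kubelet runs the binary with
    // "init" (and later verbs such as "mount"/"unmount") and parses a JSON status
    // from stdout. Empty stdout is what produces "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`                 // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Report success and declare that this driver does not implement attach/detach.
            json.NewEncoder(os.Stdout).Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Any verb this sketch does not handle is reported as unsupported.
        json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
    }

Once a driver that answers "init" this way is present in the plugin directory, the probe errors stop; until then the kubelet keeps logging the failure on every probe cycle, which is why the same triplet of messages repeats throughout this window.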
Jan 17 00:24:20.590375 containerd[1603]: time="2026-01-17T00:24:20.590114337Z" level=info msg="CreateContainer within sandbox \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e\"" Jan 17 00:24:20.593050 containerd[1603]: time="2026-01-17T00:24:20.591732514Z" level=info msg="StartContainer for \"eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e\"" Jan 17 00:24:20.690731 kubelet[2735]: E0117 00:24:20.690701 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:20.740891 kubelet[2735]: E0117 00:24:20.740819 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.740891 kubelet[2735]: W0117 00:24:20.740863 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.740891 kubelet[2735]: E0117 00:24:20.740887 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.742006 kubelet[2735]: E0117 00:24:20.741329 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.742006 kubelet[2735]: W0117 00:24:20.741342 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.742006 kubelet[2735]: E0117 00:24:20.741356 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.742006 kubelet[2735]: E0117 00:24:20.741743 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.742006 kubelet[2735]: W0117 00:24:20.741755 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.742006 kubelet[2735]: E0117 00:24:20.741803 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.745015 kubelet[2735]: E0117 00:24:20.743088 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.745015 kubelet[2735]: W0117 00:24:20.743102 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.745015 kubelet[2735]: E0117 00:24:20.743115 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:20.745468 kubelet[2735]: E0117 00:24:20.745304 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.745468 kubelet[2735]: W0117 00:24:20.745318 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.745468 kubelet[2735]: E0117 00:24:20.745331 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.748490 kubelet[2735]: E0117 00:24:20.745888 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.748573 kubelet[2735]: W0117 00:24:20.748559 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.748740 kubelet[2735]: E0117 00:24:20.748725 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.752438 kubelet[2735]: E0117 00:24:20.752336 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.753014 kubelet[2735]: W0117 00:24:20.752840 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.753014 kubelet[2735]: E0117 00:24:20.752860 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.754352 kubelet[2735]: E0117 00:24:20.753895 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.754614 kubelet[2735]: W0117 00:24:20.754483 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.754614 kubelet[2735]: E0117 00:24:20.754502 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.756063 kubelet[2735]: E0117 00:24:20.755985 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.756063 kubelet[2735]: W0117 00:24:20.756030 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.756063 kubelet[2735]: E0117 00:24:20.756045 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:20.758380 kubelet[2735]: E0117 00:24:20.758288 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.758380 kubelet[2735]: W0117 00:24:20.758303 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.758380 kubelet[2735]: E0117 00:24:20.758318 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.758839 kubelet[2735]: E0117 00:24:20.758669 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.758839 kubelet[2735]: W0117 00:24:20.758706 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.758839 kubelet[2735]: E0117 00:24:20.758719 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.759298 kubelet[2735]: E0117 00:24:20.759143 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.759298 kubelet[2735]: W0117 00:24:20.759157 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.759298 kubelet[2735]: E0117 00:24:20.759168 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.760398 kubelet[2735]: E0117 00:24:20.760353 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.760398 kubelet[2735]: W0117 00:24:20.760367 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.760398 kubelet[2735]: E0117 00:24:20.760379 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.761310 kubelet[2735]: E0117 00:24:20.760677 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.761310 kubelet[2735]: W0117 00:24:20.760690 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.761310 kubelet[2735]: E0117 00:24:20.760703 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:24:20.761310 kubelet[2735]: E0117 00:24:20.761056 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:24:20.761310 kubelet[2735]: W0117 00:24:20.761157 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:24:20.761310 kubelet[2735]: E0117 00:24:20.761171 2735 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:24:20.796048 containerd[1603]: time="2026-01-17T00:24:20.795846633Z" level=info msg="StartContainer for \"eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e\" returns successfully" Jan 17 00:24:20.912578 containerd[1603]: time="2026-01-17T00:24:20.909154630Z" level=info msg="shim disconnected" id=eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e namespace=k8s.io Jan 17 00:24:20.912578 containerd[1603]: time="2026-01-17T00:24:20.912110528Z" level=warning msg="cleaning up after shim disconnected" id=eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e namespace=k8s.io Jan 17 00:24:20.912578 containerd[1603]: time="2026-01-17T00:24:20.912133220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:21.356593 systemd[1]: run-containerd-runc-k8s.io-eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e-runc.cu99ND.mount: Deactivated successfully. Jan 17 00:24:21.357166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eefb88e0809d32a239bdd104d087ef993cb03e4b330a393b0994012de2e5270e-rootfs.mount: Deactivated successfully. 
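The recurring "Nameserver limits exceeded" warnings are a separate issue: on Linux the kubelet passes at most three nameservers into a pod's resolv.conf, so extra entries from the host configuration are dropped and the applied line ("1.1.1.1 1.0.0.1 8.8.8.8") is logged. A small sketch of that truncation, assuming a cap of three; the fourth entry here is a hypothetical host nameserver, not something taken from this log.

    // Illustrative only: how a three-nameserver cap reduces a host resolv.conf
    // list to the "applied nameserver line" reported in the warnings above.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // assumed limit, matching the kubelet's behaviour on Linux

    func applyNameserverLimit(servers []string) []string {
        if len(servers) > maxNameservers {
            return servers[:maxNameservers] // the remainder is omitted and a warning is logged
        }
        return servers
    }

    func main() {
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 8.8.4.4 is hypothetical
        fmt.Println(strings.Join(applyNameserverLimit(host), " "))   // 1.1.1.1 1.0.0.1 8.8.8.8
    }

The warning therefore repeats each time pod DNS settings are resolved and is harmless apart from the omitted servers.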
Jan 17 00:24:21.432389 kubelet[2735]: E0117 00:24:21.432109 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:21.713616 kubelet[2735]: E0117 00:24:21.713406 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:21.718632 kubelet[2735]: E0117 00:24:21.716354 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:21.719757 containerd[1603]: time="2026-01-17T00:24:21.717192993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:24:23.428461 kubelet[2735]: E0117 00:24:23.426535 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:25.433502 kubelet[2735]: E0117 00:24:25.433201 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:27.429314 kubelet[2735]: E0117 00:24:27.425163 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:28.582586 containerd[1603]: time="2026-01-17T00:24:28.581888749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:28.584026 containerd[1603]: time="2026-01-17T00:24:28.583743033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:24:28.586336 containerd[1603]: time="2026-01-17T00:24:28.586177684Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:28.593974 containerd[1603]: time="2026-01-17T00:24:28.592721802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:28.596155 containerd[1603]: time="2026-01-17T00:24:28.596032127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.878799821s" Jan 17 00:24:28.596155 
containerd[1603]: time="2026-01-17T00:24:28.596111707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:24:28.607544 containerd[1603]: time="2026-01-17T00:24:28.607387467Z" level=info msg="CreateContainer within sandbox \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:24:28.678845 containerd[1603]: time="2026-01-17T00:24:28.678712663Z" level=info msg="CreateContainer within sandbox \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6a066dcb2f4f5895f100c8d29fbbd3cef38733f6bacadd097ab26ba2e0ed6819\"" Jan 17 00:24:28.683441 containerd[1603]: time="2026-01-17T00:24:28.683208670Z" level=info msg="StartContainer for \"6a066dcb2f4f5895f100c8d29fbbd3cef38733f6bacadd097ab26ba2e0ed6819\"" Jan 17 00:24:29.028786 containerd[1603]: time="2026-01-17T00:24:29.028611110Z" level=info msg="StartContainer for \"6a066dcb2f4f5895f100c8d29fbbd3cef38733f6bacadd097ab26ba2e0ed6819\" returns successfully" Jan 17 00:24:29.434734 kubelet[2735]: E0117 00:24:29.434505 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:29.800950 kubelet[2735]: E0117 00:24:29.799880 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:30.822979 kubelet[2735]: E0117 00:24:30.819871 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:31.427395 kubelet[2735]: E0117 00:24:31.425424 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:31.439656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a066dcb2f4f5895f100c8d29fbbd3cef38733f6bacadd097ab26ba2e0ed6819-rootfs.mount: Deactivated successfully. 
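In the entries above, the install-cni container pulled ghcr.io/flatcar/calico/cni:v3.30.4 and started; its job is to drop the CNI binaries and a network configuration onto the node, after which the runtime stops reporting "cni plugin not initialized" and the kubelet marks the node ready (seen just below). As a rough sketch of that readiness condition, assuming the conventional /etc/cni/net.d configuration directory; this is not containerd's actual implementation.

    // Sketch: the "cni plugin not initialized" condition clears once a CNI
    // configuration appears in /etc/cni/net.d, which is what Calico's
    // install-cni container writes.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func networkReady(confDir string) bool {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pattern))
            if err == nil && len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        ready := networkReady("/etc/cni/net.d")
        fmt.Println("NetworkReady:", ready)
        if !ready {
            os.Exit(1)
        }
    }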
Jan 17 00:24:31.478139 kubelet[2735]: I0117 00:24:31.474538 2735 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:24:31.489919 containerd[1603]: time="2026-01-17T00:24:31.489463030Z" level=info msg="shim disconnected" id=6a066dcb2f4f5895f100c8d29fbbd3cef38733f6bacadd097ab26ba2e0ed6819 namespace=k8s.io Jan 17 00:24:31.489919 containerd[1603]: time="2026-01-17T00:24:31.489569820Z" level=warning msg="cleaning up after shim disconnected" id=6a066dcb2f4f5895f100c8d29fbbd3cef38733f6bacadd097ab26ba2e0ed6819 namespace=k8s.io Jan 17 00:24:31.489919 containerd[1603]: time="2026-01-17T00:24:31.489588895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:31.653615 kubelet[2735]: W0117 00:24:31.650108 2735 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 17 00:24:31.653615 kubelet[2735]: E0117 00:24:31.650169 2735 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 17 00:24:31.745691 kubelet[2735]: I0117 00:24:31.743870 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16ee4324-8757-4618-9329-530899bfb3f8-calico-apiserver-certs\") pod \"calico-apiserver-575b9f78b6-2wpqn\" (UID: \"16ee4324-8757-4618-9329-530899bfb3f8\") " pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" Jan 17 00:24:31.745691 kubelet[2735]: I0117 00:24:31.743932 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-backend-key-pair\") pod \"whisker-854f7549c5-c67l8\" (UID: \"f11c9c9b-8649-4722-8078-c2e7af59dd81\") " pod="calico-system/whisker-854f7549c5-c67l8" Jan 17 00:24:31.745691 kubelet[2735]: I0117 00:24:31.743960 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b773dda6-1d12-466d-8ab6-e9b4e6b1277a-goldmane-ca-bundle\") pod \"goldmane-666569f655-mdmw8\" (UID: \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\") " pod="calico-system/goldmane-666569f655-mdmw8" Jan 17 00:24:31.745691 kubelet[2735]: I0117 00:24:31.743995 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c-config-volume\") pod \"coredns-668d6bf9bc-glqqh\" (UID: \"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c\") " pod="kube-system/coredns-668d6bf9bc-glqqh" Jan 17 00:24:31.745691 kubelet[2735]: I0117 00:24:31.744025 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7904b8c1-aed5-4856-a748-a81b4e03c215-tigera-ca-bundle\") pod \"calico-kube-controllers-6c64f7b875-k79d8\" (UID: \"7904b8c1-aed5-4856-a748-a81b4e03c215\") " 
pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" Jan 17 00:24:31.746015 kubelet[2735]: I0117 00:24:31.744203 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9x79\" (UniqueName: \"kubernetes.io/projected/f11c9c9b-8649-4722-8078-c2e7af59dd81-kube-api-access-f9x79\") pod \"whisker-854f7549c5-c67l8\" (UID: \"f11c9c9b-8649-4722-8078-c2e7af59dd81\") " pod="calico-system/whisker-854f7549c5-c67l8" Jan 17 00:24:31.746015 kubelet[2735]: I0117 00:24:31.744453 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjmd\" (UniqueName: \"kubernetes.io/projected/cd16aa39-f128-48b4-a7b5-ac9f06328314-kube-api-access-7zjmd\") pod \"coredns-668d6bf9bc-s8cxw\" (UID: \"cd16aa39-f128-48b4-a7b5-ac9f06328314\") " pod="kube-system/coredns-668d6bf9bc-s8cxw" Jan 17 00:24:31.746015 kubelet[2735]: I0117 00:24:31.744486 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvk2c\" (UniqueName: \"kubernetes.io/projected/388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c-kube-api-access-kvk2c\") pod \"coredns-668d6bf9bc-glqqh\" (UID: \"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c\") " pod="kube-system/coredns-668d6bf9bc-glqqh" Jan 17 00:24:31.746015 kubelet[2735]: I0117 00:24:31.744511 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b773dda6-1d12-466d-8ab6-e9b4e6b1277a-goldmane-key-pair\") pod \"goldmane-666569f655-mdmw8\" (UID: \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\") " pod="calico-system/goldmane-666569f655-mdmw8" Jan 17 00:24:31.746015 kubelet[2735]: I0117 00:24:31.744537 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd16aa39-f128-48b4-a7b5-ac9f06328314-config-volume\") pod \"coredns-668d6bf9bc-s8cxw\" (UID: \"cd16aa39-f128-48b4-a7b5-ac9f06328314\") " pod="kube-system/coredns-668d6bf9bc-s8cxw" Jan 17 00:24:31.746195 kubelet[2735]: I0117 00:24:31.744639 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7c7\" (UniqueName: \"kubernetes.io/projected/7904b8c1-aed5-4856-a748-a81b4e03c215-kube-api-access-zt7c7\") pod \"calico-kube-controllers-6c64f7b875-k79d8\" (UID: \"7904b8c1-aed5-4856-a748-a81b4e03c215\") " pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" Jan 17 00:24:31.746195 kubelet[2735]: I0117 00:24:31.744674 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9hc\" (UniqueName: \"kubernetes.io/projected/16ee4324-8757-4618-9329-530899bfb3f8-kube-api-access-2r9hc\") pod \"calico-apiserver-575b9f78b6-2wpqn\" (UID: \"16ee4324-8757-4618-9329-530899bfb3f8\") " pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" Jan 17 00:24:31.746195 kubelet[2735]: I0117 00:24:31.744700 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b773dda6-1d12-466d-8ab6-e9b4e6b1277a-config\") pod \"goldmane-666569f655-mdmw8\" (UID: \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\") " pod="calico-system/goldmane-666569f655-mdmw8" Jan 17 00:24:31.746195 kubelet[2735]: I0117 00:24:31.744722 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hmvxt\" (UniqueName: \"kubernetes.io/projected/b773dda6-1d12-466d-8ab6-e9b4e6b1277a-kube-api-access-hmvxt\") pod \"goldmane-666569f655-mdmw8\" (UID: \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\") " pod="calico-system/goldmane-666569f655-mdmw8" Jan 17 00:24:31.746195 kubelet[2735]: I0117 00:24:31.744746 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-ca-bundle\") pod \"whisker-854f7549c5-c67l8\" (UID: \"f11c9c9b-8649-4722-8078-c2e7af59dd81\") " pod="calico-system/whisker-854f7549c5-c67l8" Jan 17 00:24:31.746572 kubelet[2735]: I0117 00:24:31.744773 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3787079-d3c5-4000-91a5-36b644436b7f-calico-apiserver-certs\") pod \"calico-apiserver-575b9f78b6-fb2xv\" (UID: \"e3787079-d3c5-4000-91a5-36b644436b7f\") " pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" Jan 17 00:24:31.746572 kubelet[2735]: I0117 00:24:31.744853 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9wrr\" (UniqueName: \"kubernetes.io/projected/e3787079-d3c5-4000-91a5-36b644436b7f-kube-api-access-l9wrr\") pod \"calico-apiserver-575b9f78b6-fb2xv\" (UID: \"e3787079-d3c5-4000-91a5-36b644436b7f\") " pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" Jan 17 00:24:31.832019 kubelet[2735]: E0117 00:24:31.830802 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:31.848143 containerd[1603]: time="2026-01-17T00:24:31.847342868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:24:31.980373 containerd[1603]: time="2026-01-17T00:24:31.980031557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c64f7b875-k79d8,Uid:7904b8c1-aed5-4856-a748-a81b4e03c215,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:31.991115 containerd[1603]: time="2026-01-17T00:24:31.991040326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-2wpqn,Uid:16ee4324-8757-4618-9329-530899bfb3f8,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:24:32.031772 containerd[1603]: time="2026-01-17T00:24:32.031594595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-fb2xv,Uid:e3787079-d3c5-4000-91a5-36b644436b7f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:24:32.064704 containerd[1603]: time="2026-01-17T00:24:32.063279023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-854f7549c5-c67l8,Uid:f11c9c9b-8649-4722-8078-c2e7af59dd81,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:32.234204 containerd[1603]: time="2026-01-17T00:24:32.234029695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mdmw8,Uid:b773dda6-1d12-466d-8ab6-e9b4e6b1277a,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:32.870981 kubelet[2735]: E0117 00:24:32.868158 2735 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:24:32.870981 kubelet[2735]: E0117 00:24:32.868332 2735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c-config-volume podName:388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c nodeName:}" failed. No retries permitted until 2026-01-17 00:24:33.368307061 +0000 UTC m=+54.583261163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c-config-volume") pod "coredns-668d6bf9bc-glqqh" (UID: "388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:24:32.890350 kubelet[2735]: E0117 00:24:32.880788 2735 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:24:32.896540 kubelet[2735]: E0117 00:24:32.895084 2735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd16aa39-f128-48b4-a7b5-ac9f06328314-config-volume podName:cd16aa39-f128-48b4-a7b5-ac9f06328314 nodeName:}" failed. No retries permitted until 2026-01-17 00:24:33.394206257 +0000 UTC m=+54.609160369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cd16aa39-f128-48b4-a7b5-ac9f06328314-config-volume") pod "coredns-668d6bf9bc-s8cxw" (UID: "cd16aa39-f128-48b4-a7b5-ac9f06328314") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:24:33.040935 containerd[1603]: time="2026-01-17T00:24:33.038400933Z" level=error msg="Failed to destroy network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.045402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e-shm.mount: Deactivated successfully. 
Jan 17 00:24:33.065856 containerd[1603]: time="2026-01-17T00:24:33.065329226Z" level=error msg="encountered an error cleaning up failed sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.065856 containerd[1603]: time="2026-01-17T00:24:33.065430856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-854f7549c5-c67l8,Uid:f11c9c9b-8649-4722-8078-c2e7af59dd81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.066637 kubelet[2735]: E0117 00:24:33.066447 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.066724 kubelet[2735]: E0117 00:24:33.066676 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-854f7549c5-c67l8" Jan 17 00:24:33.066724 kubelet[2735]: E0117 00:24:33.066715 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-854f7549c5-c67l8" Jan 17 00:24:33.067028 kubelet[2735]: E0117 00:24:33.066903 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-854f7549c5-c67l8_calico-system(f11c9c9b-8649-4722-8078-c2e7af59dd81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-854f7549c5-c67l8_calico-system(f11c9c9b-8649-4722-8078-c2e7af59dd81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-854f7549c5-c67l8" podUID="f11c9c9b-8649-4722-8078-c2e7af59dd81" Jan 17 00:24:33.107789 containerd[1603]: time="2026-01-17T00:24:33.099731954Z" level=error msg="Failed to destroy network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.123616 containerd[1603]: time="2026-01-17T00:24:33.123217834Z" level=error msg="encountered an error cleaning up failed sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.123616 containerd[1603]: time="2026-01-17T00:24:33.123346034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-2wpqn,Uid:16ee4324-8757-4618-9329-530899bfb3f8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.126537 kubelet[2735]: E0117 00:24:33.125653 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.130536 kubelet[2735]: E0117 00:24:33.130387 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" Jan 17 00:24:33.134681 kubelet[2735]: E0117 00:24:33.130637 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" Jan 17 00:24:33.134681 kubelet[2735]: E0117 00:24:33.131031 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:24:33.141939 containerd[1603]: time="2026-01-17T00:24:33.141806280Z" level=error msg="Failed to destroy network for sandbox 
\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.166535 containerd[1603]: time="2026-01-17T00:24:33.166008191Z" level=error msg="encountered an error cleaning up failed sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.166535 containerd[1603]: time="2026-01-17T00:24:33.166095133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c64f7b875-k79d8,Uid:7904b8c1-aed5-4856-a748-a81b4e03c215,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.176288 kubelet[2735]: E0117 00:24:33.167504 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.176288 kubelet[2735]: E0117 00:24:33.167587 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" Jan 17 00:24:33.176288 kubelet[2735]: E0117 00:24:33.167618 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" Jan 17 00:24:33.176496 kubelet[2735]: E0117 00:24:33.167669 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" 
podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:24:33.181637 containerd[1603]: time="2026-01-17T00:24:33.181404704Z" level=error msg="Failed to destroy network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.183013 containerd[1603]: time="2026-01-17T00:24:33.182974648Z" level=error msg="encountered an error cleaning up failed sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.183161 containerd[1603]: time="2026-01-17T00:24:33.183128896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-fb2xv,Uid:e3787079-d3c5-4000-91a5-36b644436b7f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.185086 kubelet[2735]: E0117 00:24:33.184758 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.185086 kubelet[2735]: E0117 00:24:33.184907 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" Jan 17 00:24:33.185086 kubelet[2735]: E0117 00:24:33.184947 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" Jan 17 00:24:33.185346 kubelet[2735]: E0117 00:24:33.185010 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:24:33.289603 containerd[1603]: time="2026-01-17T00:24:33.289493395Z" level=error msg="Failed to destroy network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.293348 containerd[1603]: time="2026-01-17T00:24:33.292047753Z" level=error msg="encountered an error cleaning up failed sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.294938 containerd[1603]: time="2026-01-17T00:24:33.293700361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mdmw8,Uid:b773dda6-1d12-466d-8ab6-e9b4e6b1277a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.295596 kubelet[2735]: E0117 00:24:33.295435 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.295596 kubelet[2735]: E0117 00:24:33.295557 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mdmw8" Jan 17 00:24:33.295728 kubelet[2735]: E0117 00:24:33.295594 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mdmw8" Jan 17 00:24:33.295728 kubelet[2735]: E0117 00:24:33.295654 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:24:33.453591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4-shm.mount: Deactivated successfully. Jan 17 00:24:33.462663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e-shm.mount: Deactivated successfully. Jan 17 00:24:33.463083 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc-shm.mount: Deactivated successfully. Jan 17 00:24:33.463419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b-shm.mount: Deactivated successfully. Jan 17 00:24:33.469506 containerd[1603]: time="2026-01-17T00:24:33.469017399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pzcck,Uid:bdf7dcb1-7f01-49ed-b25d-dd851c91e195,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:33.547451 kubelet[2735]: E0117 00:24:33.545357 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:33.556555 kubelet[2735]: E0117 00:24:33.550527 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:33.563317 containerd[1603]: time="2026-01-17T00:24:33.562124492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8cxw,Uid:cd16aa39-f128-48b4-a7b5-ac9f06328314,Namespace:kube-system,Attempt:0,}" Jan 17 00:24:33.563317 containerd[1603]: time="2026-01-17T00:24:33.562673849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glqqh,Uid:388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c,Namespace:kube-system,Attempt:0,}" Jan 17 00:24:33.723164 containerd[1603]: time="2026-01-17T00:24:33.723027991Z" level=error msg="Failed to destroy network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.724414 containerd[1603]: time="2026-01-17T00:24:33.724373385Z" level=error msg="encountered an error cleaning up failed sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.724595 containerd[1603]: time="2026-01-17T00:24:33.724558029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pzcck,Uid:bdf7dcb1-7f01-49ed-b25d-dd851c91e195,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 00:24:33.726548 kubelet[2735]: E0117 00:24:33.726444 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.726639 kubelet[2735]: E0117 00:24:33.726557 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:33.726639 kubelet[2735]: E0117 00:24:33.726592 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pzcck" Jan 17 00:24:33.726786 kubelet[2735]: E0117 00:24:33.726648 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:33.825647 containerd[1603]: time="2026-01-17T00:24:33.825482274Z" level=error msg="Failed to destroy network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.826659 containerd[1603]: time="2026-01-17T00:24:33.826377287Z" level=error msg="encountered an error cleaning up failed sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.826659 containerd[1603]: time="2026-01-17T00:24:33.826438451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8cxw,Uid:cd16aa39-f128-48b4-a7b5-ac9f06328314,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.828460 kubelet[2735]: E0117 00:24:33.828178 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.828548 kubelet[2735]: E0117 00:24:33.828485 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s8cxw" Jan 17 00:24:33.828548 kubelet[2735]: E0117 00:24:33.828515 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s8cxw" Jan 17 00:24:33.828609 kubelet[2735]: E0117 00:24:33.828563 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s8cxw_kube-system(cd16aa39-f128-48b4-a7b5-ac9f06328314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s8cxw_kube-system(cd16aa39-f128-48b4-a7b5-ac9f06328314)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s8cxw" podUID="cd16aa39-f128-48b4-a7b5-ac9f06328314" Jan 17 00:24:33.891342 kubelet[2735]: I0117 00:24:33.891171 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:24:33.910802 kubelet[2735]: I0117 00:24:33.910765 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:24:33.914218 containerd[1603]: time="2026-01-17T00:24:33.913380417Z" level=error msg="Failed to destroy network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.923460 containerd[1603]: time="2026-01-17T00:24:33.923179736Z" level=error msg="encountered an error cleaning up failed sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 17 00:24:33.924370 containerd[1603]: time="2026-01-17T00:24:33.924065893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glqqh,Uid:388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.926523 kubelet[2735]: I0117 00:24:33.925858 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:24:33.926599 kubelet[2735]: E0117 00:24:33.926548 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:33.926807 kubelet[2735]: E0117 00:24:33.926691 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-glqqh" Jan 17 00:24:33.926947 kubelet[2735]: E0117 00:24:33.926900 2735 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-glqqh" Jan 17 00:24:33.927187 kubelet[2735]: E0117 00:24:33.927047 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-glqqh_kube-system(388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-glqqh_kube-system(388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-glqqh" podUID="388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c" Jan 17 00:24:33.949539 kubelet[2735]: I0117 00:24:33.941613 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:24:33.966199 containerd[1603]: time="2026-01-17T00:24:33.966018186Z" level=info msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" Jan 17 00:24:33.966529 containerd[1603]: time="2026-01-17T00:24:33.966481891Z" level=info msg="StopPodSandbox for 
\"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" Jan 17 00:24:33.972535 containerd[1603]: time="2026-01-17T00:24:33.972458769Z" level=info msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" Jan 17 00:24:33.975651 containerd[1603]: time="2026-01-17T00:24:33.975378464Z" level=info msg="Ensure that sandbox 6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc in task-service has been cleanup successfully" Jan 17 00:24:33.978563 containerd[1603]: time="2026-01-17T00:24:33.976645242Z" level=info msg="Ensure that sandbox fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8 in task-service has been cleanup successfully" Jan 17 00:24:33.980442 containerd[1603]: time="2026-01-17T00:24:33.976666005Z" level=info msg="Ensure that sandbox a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e in task-service has been cleanup successfully" Jan 17 00:24:33.980969 containerd[1603]: time="2026-01-17T00:24:33.978439966Z" level=info msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" Jan 17 00:24:33.981126 kubelet[2735]: I0117 00:24:33.980974 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:33.981449 containerd[1603]: time="2026-01-17T00:24:33.981424252Z" level=info msg="Ensure that sandbox 0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e in task-service has been cleanup successfully" Jan 17 00:24:33.998551 containerd[1603]: time="2026-01-17T00:24:33.994354442Z" level=info msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" Jan 17 00:24:33.999000 containerd[1603]: time="2026-01-17T00:24:33.998973468Z" level=info msg="Ensure that sandbox 7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af in task-service has been cleanup successfully" Jan 17 00:24:34.012749 kubelet[2735]: I0117 00:24:34.012518 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:34.020020 containerd[1603]: time="2026-01-17T00:24:34.013390653Z" level=info msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" Jan 17 00:24:34.020020 containerd[1603]: time="2026-01-17T00:24:34.013601868Z" level=info msg="Ensure that sandbox 1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b in task-service has been cleanup successfully" Jan 17 00:24:34.027002 kubelet[2735]: I0117 00:24:34.021365 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:34.067729 containerd[1603]: time="2026-01-17T00:24:34.065799582Z" level=info msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" Jan 17 00:24:34.067729 containerd[1603]: time="2026-01-17T00:24:34.067370412Z" level=info msg="Ensure that sandbox 9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4 in task-service has been cleanup successfully" Jan 17 00:24:34.191359 containerd[1603]: time="2026-01-17T00:24:34.190786100Z" level=error msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" failed" error="failed to destroy network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.191741 kubelet[2735]: E0117 00:24:34.191439 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:24:34.191741 kubelet[2735]: E0117 00:24:34.191535 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc"} Jan 17 00:24:34.191741 kubelet[2735]: E0117 00:24:34.191622 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16ee4324-8757-4618-9329-530899bfb3f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.191741 kubelet[2735]: E0117 00:24:34.191658 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16ee4324-8757-4618-9329-530899bfb3f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:24:34.218145 containerd[1603]: time="2026-01-17T00:24:34.218089443Z" level=error msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" failed" error="failed to destroy network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.218808 kubelet[2735]: E0117 00:24:34.218663 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:24:34.219217 containerd[1603]: time="2026-01-17T00:24:34.219148710Z" level=error msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" failed" error="failed to destroy network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.219444 kubelet[2735]: E0117 00:24:34.219190 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8"} Jan 17 00:24:34.219444 kubelet[2735]: E0117 00:24:34.219344 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.219607 kubelet[2735]: E0117 00:24:34.219443 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:34.220064 kubelet[2735]: E0117 00:24:34.220015 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:24:34.220659 kubelet[2735]: E0117 00:24:34.220628 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e"} Jan 17 00:24:34.220763 kubelet[2735]: E0117 00:24:34.220742 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.221077 kubelet[2735]: E0117 00:24:34.221036 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-854f7549c5-c67l8" podUID="f11c9c9b-8649-4722-8078-c2e7af59dd81" Jan 17 00:24:34.244205 containerd[1603]: time="2026-01-17T00:24:34.244067993Z" 
level=error msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" failed" error="failed to destroy network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.244713 kubelet[2735]: E0117 00:24:34.244661 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:34.245995 containerd[1603]: time="2026-01-17T00:24:34.245407924Z" level=error msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" failed" error="failed to destroy network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.246768 kubelet[2735]: E0117 00:24:34.246727 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af"} Jan 17 00:24:34.247130 kubelet[2735]: E0117 00:24:34.247106 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd16aa39-f128-48b4-a7b5-ac9f06328314\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.269475 kubelet[2735]: E0117 00:24:34.264740 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd16aa39-f128-48b4-a7b5-ac9f06328314\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s8cxw" podUID="cd16aa39-f128-48b4-a7b5-ac9f06328314" Jan 17 00:24:34.269475 kubelet[2735]: E0117 00:24:34.266133 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:34.269475 kubelet[2735]: E0117 00:24:34.266364 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b"} Jan 17 00:24:34.269475 kubelet[2735]: E0117 00:24:34.266456 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7904b8c1-aed5-4856-a748-a81b4e03c215\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.270911 containerd[1603]: time="2026-01-17T00:24:34.265403838Z" level=error msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" failed" error="failed to destroy network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.270911 containerd[1603]: time="2026-01-17T00:24:34.270316991Z" level=error msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" failed" error="failed to destroy network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:34.270999 kubelet[2735]: E0117 00:24:34.266558 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7904b8c1-aed5-4856-a748-a81b4e03c215\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:24:34.272586 kubelet[2735]: E0117 00:24:34.271753 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:24:34.272670 kubelet[2735]: E0117 00:24:34.272485 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e"} Jan 17 00:24:34.273103 kubelet[2735]: E0117 00:24:34.272962 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3787079-d3c5-4000-91a5-36b644436b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.273103 kubelet[2735]: E0117 00:24:34.273071 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3787079-d3c5-4000-91a5-36b644436b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:24:34.273946 kubelet[2735]: E0117 00:24:34.273014 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:34.273946 kubelet[2735]: E0117 00:24:34.273355 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4"} Jan 17 00:24:34.273946 kubelet[2735]: E0117 00:24:34.273390 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:34.273946 kubelet[2735]: E0117 00:24:34.273417 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:24:34.439636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af-shm.mount: Deactivated successfully. Jan 17 00:24:34.439934 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8-shm.mount: Deactivated successfully. 
Jan 17 00:24:35.027934 kubelet[2735]: I0117 00:24:35.025590 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:24:35.028708 containerd[1603]: time="2026-01-17T00:24:35.026754929Z" level=info msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" Jan 17 00:24:35.034590 containerd[1603]: time="2026-01-17T00:24:35.034145470Z" level=info msg="Ensure that sandbox e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51 in task-service has been cleanup successfully" Jan 17 00:24:35.191708 containerd[1603]: time="2026-01-17T00:24:35.188722050Z" level=error msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" failed" error="failed to destroy network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:35.193130 kubelet[2735]: E0117 00:24:35.189045 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:24:35.193130 kubelet[2735]: E0117 00:24:35.189109 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51"} Jan 17 00:24:35.193130 kubelet[2735]: E0117 00:24:35.190334 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:35.193130 kubelet[2735]: E0117 00:24:35.190378 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-glqqh" podUID="388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c" Jan 17 00:24:45.428495 containerd[1603]: time="2026-01-17T00:24:45.427967926Z" level=info msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" Jan 17 00:24:45.529794 containerd[1603]: time="2026-01-17T00:24:45.529645129Z" level=error msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" failed" error="failed to destroy network for sandbox 
\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:45.530142 kubelet[2735]: E0117 00:24:45.530042 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:45.530142 kubelet[2735]: E0117 00:24:45.530125 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b"} Jan 17 00:24:45.530807 kubelet[2735]: E0117 00:24:45.530175 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7904b8c1-aed5-4856-a748-a81b4e03c215\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:45.530807 kubelet[2735]: E0117 00:24:45.530208 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7904b8c1-aed5-4856-a748-a81b4e03c215\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:24:46.427601 containerd[1603]: time="2026-01-17T00:24:46.427479877Z" level=info msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" Jan 17 00:24:46.516597 containerd[1603]: time="2026-01-17T00:24:46.514763901Z" level=error msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" failed" error="failed to destroy network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:46.518402 kubelet[2735]: E0117 00:24:46.516281 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:46.518402 kubelet[2735]: E0117 00:24:46.516354 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af"} Jan 17 00:24:46.518402 kubelet[2735]: E0117 00:24:46.516407 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd16aa39-f128-48b4-a7b5-ac9f06328314\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:46.518402 kubelet[2735]: E0117 00:24:46.516443 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd16aa39-f128-48b4-a7b5-ac9f06328314\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s8cxw" podUID="cd16aa39-f128-48b4-a7b5-ac9f06328314" Jan 17 00:24:47.433394 containerd[1603]: time="2026-01-17T00:24:47.433043111Z" level=info msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" Jan 17 00:24:47.434847 containerd[1603]: time="2026-01-17T00:24:47.434033679Z" level=info msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" Jan 17 00:24:47.552101 containerd[1603]: time="2026-01-17T00:24:47.550591030Z" level=error msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" failed" error="failed to destroy network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:47.552101 containerd[1603]: time="2026-01-17T00:24:47.551049536Z" level=error msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" failed" error="failed to destroy network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:47.552811 kubelet[2735]: E0117 00:24:47.551528 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:24:47.552811 kubelet[2735]: E0117 00:24:47.551606 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e"} Jan 17 00:24:47.552811 kubelet[2735]: E0117 00:24:47.551661 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"e3787079-d3c5-4000-91a5-36b644436b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:47.552811 kubelet[2735]: E0117 00:24:47.551700 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3787079-d3c5-4000-91a5-36b644436b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:24:47.553466 kubelet[2735]: E0117 00:24:47.552034 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:47.553466 kubelet[2735]: E0117 00:24:47.552381 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4"} Jan 17 00:24:47.553466 kubelet[2735]: E0117 00:24:47.552525 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:47.553466 kubelet[2735]: E0117 00:24:47.552671 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b773dda6-1d12-466d-8ab6-e9b4e6b1277a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:24:48.430380 containerd[1603]: time="2026-01-17T00:24:48.429521735Z" level=info msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" Jan 17 00:24:48.431862 containerd[1603]: time="2026-01-17T00:24:48.430683526Z" level=info msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" Jan 17 00:24:48.431862 containerd[1603]: time="2026-01-17T00:24:48.431449197Z" level=info msg="StopPodSandbox for 
\"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" Jan 17 00:24:48.535316 containerd[1603]: time="2026-01-17T00:24:48.534760252Z" level=error msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" failed" error="failed to destroy network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:48.535316 containerd[1603]: time="2026-01-17T00:24:48.535123731Z" level=error msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" failed" error="failed to destroy network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:48.535511 kubelet[2735]: E0117 00:24:48.535434 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:24:48.535511 kubelet[2735]: E0117 00:24:48.535490 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc"} Jan 17 00:24:48.535599 kubelet[2735]: E0117 00:24:48.535531 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16ee4324-8757-4618-9329-530899bfb3f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:48.535599 kubelet[2735]: E0117 00:24:48.535561 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16ee4324-8757-4618-9329-530899bfb3f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:24:48.535830 kubelet[2735]: E0117 00:24:48.535596 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 
00:24:48.535830 kubelet[2735]: E0117 00:24:48.535617 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e"} Jan 17 00:24:48.535830 kubelet[2735]: E0117 00:24:48.535778 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:48.535830 kubelet[2735]: E0117 00:24:48.535811 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-854f7549c5-c67l8" podUID="f11c9c9b-8649-4722-8078-c2e7af59dd81" Jan 17 00:24:48.542418 containerd[1603]: time="2026-01-17T00:24:48.541976519Z" level=error msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" failed" error="failed to destroy network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:48.542842 kubelet[2735]: E0117 00:24:48.542553 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:24:48.542842 kubelet[2735]: E0117 00:24:48.542660 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8"} Jan 17 00:24:48.542842 kubelet[2735]: E0117 00:24:48.542711 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:48.542842 kubelet[2735]: E0117 00:24:48.542745 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdf7dcb1-7f01-49ed-b25d-dd851c91e195\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:24:49.303078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23744703.mount: Deactivated successfully. Jan 17 00:24:49.429411 containerd[1603]: time="2026-01-17T00:24:49.429323600Z" level=info msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" Jan 17 00:24:49.594192 containerd[1603]: time="2026-01-17T00:24:49.593064867Z" level=error msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" failed" error="failed to destroy network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:24:49.594397 kubelet[2735]: E0117 00:24:49.593524 2735 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:24:49.594397 kubelet[2735]: E0117 00:24:49.593597 2735 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51"} Jan 17 00:24:49.594397 kubelet[2735]: E0117 00:24:49.593646 2735 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:24:49.594397 kubelet[2735]: E0117 00:24:49.593680 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-glqqh" podUID="388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c" Jan 17 00:24:49.601996 containerd[1603]: time="2026-01-17T00:24:49.601153314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:49.612481 containerd[1603]: time="2026-01-17T00:24:49.609192794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:24:49.619425 containerd[1603]: 
time="2026-01-17T00:24:49.618704019Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:49.631132 containerd[1603]: time="2026-01-17T00:24:49.629840587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:49.631132 containerd[1603]: time="2026-01-17T00:24:49.630998290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 17.783603123s" Jan 17 00:24:49.631132 containerd[1603]: time="2026-01-17T00:24:49.631032053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:24:49.674134 containerd[1603]: time="2026-01-17T00:24:49.673990626Z" level=info msg="CreateContainer within sandbox \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:24:49.724467 containerd[1603]: time="2026-01-17T00:24:49.724073322Z" level=info msg="CreateContainer within sandbox \"76b4a04b7af59ddb9cc317f401ee1a3daf080991fd072f95b9e8adbc279c28c3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3d8833480a9c650a18f2f33aec7782794ba71d29e942c88445211735182545fd\"" Jan 17 00:24:49.725279 containerd[1603]: time="2026-01-17T00:24:49.725138882Z" level=info msg="StartContainer for \"3d8833480a9c650a18f2f33aec7782794ba71d29e942c88445211735182545fd\"" Jan 17 00:24:49.920329 containerd[1603]: time="2026-01-17T00:24:49.913285095Z" level=info msg="StartContainer for \"3d8833480a9c650a18f2f33aec7782794ba71d29e942c88445211735182545fd\" returns successfully" Jan 17 00:24:50.167340 kubelet[2735]: E0117 00:24:50.166293 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:50.245004 kubelet[2735]: I0117 00:24:50.244545 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p84bc" podStartSLOduration=2.239259236 podStartE2EDuration="35.244522731s" podCreationTimestamp="2026-01-17 00:24:15 +0000 UTC" firstStartedPulling="2026-01-17 00:24:16.629094222 +0000 UTC m=+37.844048323" lastFinishedPulling="2026-01-17 00:24:49.634357715 +0000 UTC m=+70.849311818" observedRunningTime="2026-01-17 00:24:50.238640362 +0000 UTC m=+71.453594474" watchObservedRunningTime="2026-01-17 00:24:50.244522731 +0000 UTC m=+71.459476833" Jan 17 00:24:50.374349 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:24:50.374490 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 00:24:50.930856 containerd[1603]: time="2026-01-17T00:24:50.930668453Z" level=info msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" Jan 17 00:24:51.174581 kubelet[2735]: E0117 00:24:51.174465 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.306 [INFO][4165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.306 [INFO][4165] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" iface="eth0" netns="/var/run/netns/cni-d783d486-977e-1437-2098-d92e7ffdc326" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.311 [INFO][4165] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" iface="eth0" netns="/var/run/netns/cni-d783d486-977e-1437-2098-d92e7ffdc326" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.312 [INFO][4165] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" iface="eth0" netns="/var/run/netns/cni-d783d486-977e-1437-2098-d92e7ffdc326" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.312 [INFO][4165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.312 [INFO][4165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.897 [INFO][4190] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.899 [INFO][4190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.900 [INFO][4190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.945 [WARNING][4190] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.945 [INFO][4190] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.971 [INFO][4190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:24:52.001322 containerd[1603]: 2026-01-17 00:24:51.984 [INFO][4165] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:24:52.011974 containerd[1603]: time="2026-01-17T00:24:52.011409352Z" level=info msg="TearDown network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" successfully" Jan 17 00:24:52.011974 containerd[1603]: time="2026-01-17T00:24:52.011452983Z" level=info msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" returns successfully" Jan 17 00:24:52.017726 systemd[1]: run-netns-cni\x2dd783d486\x2d977e\x2d1437\x2d2098\x2dd92e7ffdc326.mount: Deactivated successfully. Jan 17 00:24:52.101337 kubelet[2735]: I0117 00:24:52.101154 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-backend-key-pair\") pod \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" (UID: \"f11c9c9b-8649-4722-8078-c2e7af59dd81\") " Jan 17 00:24:52.101337 kubelet[2735]: I0117 00:24:52.101329 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9x79\" (UniqueName: \"kubernetes.io/projected/f11c9c9b-8649-4722-8078-c2e7af59dd81-kube-api-access-f9x79\") pod \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" (UID: \"f11c9c9b-8649-4722-8078-c2e7af59dd81\") " Jan 17 00:24:52.101588 kubelet[2735]: I0117 00:24:52.101384 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-ca-bundle\") pod \"f11c9c9b-8649-4722-8078-c2e7af59dd81\" (UID: \"f11c9c9b-8649-4722-8078-c2e7af59dd81\") " Jan 17 00:24:52.105501 kubelet[2735]: I0117 00:24:52.105385 2735 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f11c9c9b-8649-4722-8078-c2e7af59dd81" (UID: "f11c9c9b-8649-4722-8078-c2e7af59dd81"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:24:52.114798 kubelet[2735]: I0117 00:24:52.114712 2735 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f11c9c9b-8649-4722-8078-c2e7af59dd81" (UID: "f11c9c9b-8649-4722-8078-c2e7af59dd81"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:24:52.124479 kubelet[2735]: I0117 00:24:52.115279 2735 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11c9c9b-8649-4722-8078-c2e7af59dd81-kube-api-access-f9x79" (OuterVolumeSpecName: "kube-api-access-f9x79") pod "f11c9c9b-8649-4722-8078-c2e7af59dd81" (UID: "f11c9c9b-8649-4722-8078-c2e7af59dd81"). InnerVolumeSpecName "kube-api-access-f9x79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:24:52.116288 systemd[1]: var-lib-kubelet-pods-f11c9c9b\x2d8649\x2d4722\x2d8078\x2dc2e7af59dd81-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 17 00:24:52.124010 systemd[1]: var-lib-kubelet-pods-f11c9c9b\x2d8649\x2d4722\x2d8078\x2dc2e7af59dd81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9x79.mount: Deactivated successfully. Jan 17 00:24:52.202841 kubelet[2735]: I0117 00:24:52.202779 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 17 00:24:52.202841 kubelet[2735]: I0117 00:24:52.202840 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9x79\" (UniqueName: \"kubernetes.io/projected/f11c9c9b-8649-4722-8078-c2e7af59dd81-kube-api-access-f9x79\") on node \"localhost\" DevicePath \"\"" Jan 17 00:24:52.203571 kubelet[2735]: I0117 00:24:52.202858 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11c9c9b-8649-4722-8078-c2e7af59dd81-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 17 00:24:52.506598 kubelet[2735]: I0117 00:24:52.506508 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/089f642f-ff29-4db0-ba9f-a6e7ff0183de-whisker-ca-bundle\") pod \"whisker-5689644567-q7l8h\" (UID: \"089f642f-ff29-4db0-ba9f-a6e7ff0183de\") " pod="calico-system/whisker-5689644567-q7l8h" Jan 17 00:24:52.506598 kubelet[2735]: I0117 00:24:52.506570 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpgp\" (UniqueName: \"kubernetes.io/projected/089f642f-ff29-4db0-ba9f-a6e7ff0183de-kube-api-access-shpgp\") pod \"whisker-5689644567-q7l8h\" (UID: \"089f642f-ff29-4db0-ba9f-a6e7ff0183de\") " pod="calico-system/whisker-5689644567-q7l8h" Jan 17 00:24:52.506598 kubelet[2735]: I0117 00:24:52.506602 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/089f642f-ff29-4db0-ba9f-a6e7ff0183de-whisker-backend-key-pair\") pod \"whisker-5689644567-q7l8h\" (UID: \"089f642f-ff29-4db0-ba9f-a6e7ff0183de\") " pod="calico-system/whisker-5689644567-q7l8h" Jan 17 00:24:52.809299 containerd[1603]: time="2026-01-17T00:24:52.807505122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5689644567-q7l8h,Uid:089f642f-ff29-4db0-ba9f-a6e7ff0183de,Namespace:calico-system,Attempt:0,}" Jan 17 00:24:53.002652 kernel: bpftool[4359]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:24:53.434676 kubelet[2735]: I0117 00:24:53.434210 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f11c9c9b-8649-4722-8078-c2e7af59dd81" path="/var/lib/kubelet/pods/f11c9c9b-8649-4722-8078-c2e7af59dd81/volumes" Jan 17 00:24:53.516034 systemd-networkd[1265]: calia19bf7004e4: Link UP Jan 17 00:24:53.516382 systemd-networkd[1265]: calia19bf7004e4: Gained carrier Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.098 [INFO][4343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5689644567--q7l8h-eth0 whisker-5689644567- calico-system 089f642f-ff29-4db0-ba9f-a6e7ff0183de 1063 0 2026-01-17 00:24:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5689644567 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5689644567-q7l8h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia19bf7004e4 [] [] }} ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.098 [INFO][4343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.223 [INFO][4365] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" HandleID="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Workload="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.224 [INFO][4365] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" HandleID="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Workload="localhost-k8s-whisker--5689644567--q7l8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4a20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5689644567-q7l8h", "timestamp":"2026-01-17 00:24:53.223547179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.224 [INFO][4365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.225 [INFO][4365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.225 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.271 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.308 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.341 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.378 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.389 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.389 [INFO][4365] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.403 [INFO][4365] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.435 [INFO][4365] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.479 [INFO][4365] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.479 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" host="localhost" Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.479 [INFO][4365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:24:53.584112 containerd[1603]: 2026-01-17 00:24:53.479 [INFO][4365] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" HandleID="k8s-pod-network.b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Workload="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.590417 containerd[1603]: 2026-01-17 00:24:53.485 [INFO][4343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5689644567--q7l8h-eth0", GenerateName:"whisker-5689644567-", Namespace:"calico-system", SelfLink:"", UID:"089f642f-ff29-4db0-ba9f-a6e7ff0183de", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5689644567", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5689644567-q7l8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia19bf7004e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:24:53.590417 containerd[1603]: 2026-01-17 00:24:53.485 [INFO][4343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.590417 containerd[1603]: 2026-01-17 00:24:53.485 [INFO][4343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia19bf7004e4 ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.590417 containerd[1603]: 2026-01-17 00:24:53.526 [INFO][4343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.590417 containerd[1603]: 2026-01-17 00:24:53.527 [INFO][4343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5689644567--q7l8h-eth0", GenerateName:"whisker-5689644567-", Namespace:"calico-system", SelfLink:"", UID:"089f642f-ff29-4db0-ba9f-a6e7ff0183de", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5689644567", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd", Pod:"whisker-5689644567-q7l8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia19bf7004e4", MAC:"4e:63:25:dd:fc:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:24:53.590417 containerd[1603]: 2026-01-17 00:24:53.577 [INFO][4343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd" Namespace="calico-system" Pod="whisker-5689644567-q7l8h" WorkloadEndpoint="localhost-k8s-whisker--5689644567--q7l8h-eth0" Jan 17 00:24:53.640599 systemd-networkd[1265]: vxlan.calico: Link UP Jan 17 00:24:53.640640 systemd-networkd[1265]: vxlan.calico: Gained carrier Jan 17 00:24:53.678343 containerd[1603]: time="2026-01-17T00:24:53.677630514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:24:53.679299 containerd[1603]: time="2026-01-17T00:24:53.678567193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:24:53.679549 containerd[1603]: time="2026-01-17T00:24:53.679455523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:53.680286 containerd[1603]: time="2026-01-17T00:24:53.679920541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:53.739465 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:24:53.815148 containerd[1603]: time="2026-01-17T00:24:53.815000071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5689644567-q7l8h,Uid:089f642f-ff29-4db0-ba9f-a6e7ff0183de,Namespace:calico-system,Attempt:0,} returns sandbox id \"b11cacedd14fdc5951370e2afc40f9cc32d82d9e4b08a29f2395594d06d97ffd\"" Jan 17 00:24:53.821101 containerd[1603]: time="2026-01-17T00:24:53.820583562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:24:53.911347 containerd[1603]: time="2026-01-17T00:24:53.906550969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:53.973840 containerd[1603]: time="2026-01-17T00:24:53.913993160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:24:53.974068 containerd[1603]: time="2026-01-17T00:24:53.915190532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:24:53.974508 kubelet[2735]: E0117 00:24:53.974148 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:24:53.974508 kubelet[2735]: E0117 00:24:53.974334 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:24:53.974647 kubelet[2735]: E0117 00:24:53.974506 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f46c655c0c40418aada782a2b06c3fb5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:53.983289 containerd[1603]: time="2026-01-17T00:24:53.983067808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:24:54.095771 containerd[1603]: time="2026-01-17T00:24:54.092647184Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:54.100398 containerd[1603]: time="2026-01-17T00:24:54.100328522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:24:54.100561 containerd[1603]: time="2026-01-17T00:24:54.100383254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:24:54.100634 kubelet[2735]: E0117 00:24:54.100594 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:24:54.100688 kubelet[2735]: E0117 00:24:54.100656 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:24:54.100983 kubelet[2735]: E0117 00:24:54.100792 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:54.105089 kubelet[2735]: E0117 00:24:54.102748 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:24:54.196336 kubelet[2735]: E0117 00:24:54.196078 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:24:54.713697 systemd-networkd[1265]: vxlan.calico: Gained IPv6LL Jan 17 00:24:55.200596 kubelet[2735]: E0117 00:24:55.199872 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:24:55.286650 systemd-networkd[1265]: calia19bf7004e4: Gained IPv6LL Jan 17 00:24:57.431575 kubelet[2735]: E0117 00:24:57.429659 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:57.443754 containerd[1603]: time="2026-01-17T00:24:57.440766883Z" level=info msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.673 [INFO][4518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.678 [INFO][4518] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" iface="eth0" netns="/var/run/netns/cni-912615e2-4e10-a5bc-6d32-07b8564c3a49" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.678 [INFO][4518] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" iface="eth0" netns="/var/run/netns/cni-912615e2-4e10-a5bc-6d32-07b8564c3a49" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.679 [INFO][4518] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" iface="eth0" netns="/var/run/netns/cni-912615e2-4e10-a5bc-6d32-07b8564c3a49" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.680 [INFO][4518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.680 [INFO][4518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.823 [INFO][4527] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.823 [INFO][4527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.823 [INFO][4527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.842 [WARNING][4527] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.842 [INFO][4527] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.852 [INFO][4527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:24:57.872591 containerd[1603]: 2026-01-17 00:24:57.865 [INFO][4518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:24:57.881114 systemd[1]: run-netns-cni\x2d912615e2\x2d4e10\x2da5bc\x2d6d32\x2d07b8564c3a49.mount: Deactivated successfully. 
Jan 17 00:24:57.884928 containerd[1603]: time="2026-01-17T00:24:57.884826420Z" level=info msg="TearDown network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" successfully" Jan 17 00:24:57.884995 containerd[1603]: time="2026-01-17T00:24:57.884931086Z" level=info msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" returns successfully" Jan 17 00:24:57.886117 containerd[1603]: time="2026-01-17T00:24:57.885992321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c64f7b875-k79d8,Uid:7904b8c1-aed5-4856-a748-a81b4e03c215,Namespace:calico-system,Attempt:1,}" Jan 17 00:24:58.330211 systemd-networkd[1265]: cali28ec92be2f1: Link UP Jan 17 00:24:58.331596 systemd-networkd[1265]: cali28ec92be2f1: Gained carrier Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.076 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0 calico-kube-controllers-6c64f7b875- calico-system 7904b8c1-aed5-4856-a748-a81b4e03c215 1103 0 2026-01-17 00:24:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c64f7b875 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c64f7b875-k79d8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali28ec92be2f1 [] [] }} ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.076 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.189 [INFO][4548] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" HandleID="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.190 [INFO][4548] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" HandleID="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c64f7b875-k79d8", "timestamp":"2026-01-17 00:24:58.189514608 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.190 [INFO][4548] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.191 [INFO][4548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.192 [INFO][4548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.214 [INFO][4548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.236 [INFO][4548] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.274 [INFO][4548] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.282 [INFO][4548] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.288 [INFO][4548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.288 [INFO][4548] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.291 [INFO][4548] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308 Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.299 [INFO][4548] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.313 [INFO][4548] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.313 [INFO][4548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" host="localhost" Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.313 [INFO][4548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:24:58.387305 containerd[1603]: 2026-01-17 00:24:58.314 [INFO][4548] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" HandleID="k8s-pod-network.c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.391812 containerd[1603]: 2026-01-17 00:24:58.324 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0", GenerateName:"calico-kube-controllers-6c64f7b875-", Namespace:"calico-system", SelfLink:"", UID:"7904b8c1-aed5-4856-a748-a81b4e03c215", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c64f7b875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c64f7b875-k79d8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28ec92be2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:24:58.391812 containerd[1603]: 2026-01-17 00:24:58.324 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.391812 containerd[1603]: 2026-01-17 00:24:58.324 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28ec92be2f1 ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.391812 containerd[1603]: 2026-01-17 00:24:58.332 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.391812 containerd[1603]: 2026-01-17 00:24:58.334 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0", GenerateName:"calico-kube-controllers-6c64f7b875-", Namespace:"calico-system", SelfLink:"", UID:"7904b8c1-aed5-4856-a748-a81b4e03c215", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c64f7b875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308", Pod:"calico-kube-controllers-6c64f7b875-k79d8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28ec92be2f1", MAC:"52:a6:79:35:e8:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:24:58.391812 containerd[1603]: 2026-01-17 00:24:58.383 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308" Namespace="calico-system" Pod="calico-kube-controllers-6c64f7b875-k79d8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:24:58.435594 containerd[1603]: time="2026-01-17T00:24:58.431436734Z" level=info msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" Jan 17 00:24:58.499478 containerd[1603]: time="2026-01-17T00:24:58.498182837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:24:58.499478 containerd[1603]: time="2026-01-17T00:24:58.499120059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:24:58.499478 containerd[1603]: time="2026-01-17T00:24:58.499184970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:58.499478 containerd[1603]: time="2026-01-17T00:24:58.499427833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:58.578806 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:24:58.649840 containerd[1603]: time="2026-01-17T00:24:58.649685198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c64f7b875-k79d8,Uid:7904b8c1-aed5-4856-a748-a81b4e03c215,Namespace:calico-system,Attempt:1,} returns sandbox id \"c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308\"" Jan 17 00:24:58.653782 containerd[1603]: time="2026-01-17T00:24:58.653739280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.623 [INFO][4583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.623 [INFO][4583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" iface="eth0" netns="/var/run/netns/cni-30fe47c8-c905-f265-303d-64d08b5d9894" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.624 [INFO][4583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" iface="eth0" netns="/var/run/netns/cni-30fe47c8-c905-f265-303d-64d08b5d9894" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.624 [INFO][4583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" iface="eth0" netns="/var/run/netns/cni-30fe47c8-c905-f265-303d-64d08b5d9894" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.624 [INFO][4583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.624 [INFO][4583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.705 [INFO][4621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.705 [INFO][4621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.705 [INFO][4621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.719 [WARNING][4621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.719 [INFO][4621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.722 [INFO][4621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:24:58.730934 containerd[1603]: 2026-01-17 00:24:58.726 [INFO][4583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:24:58.730934 containerd[1603]: time="2026-01-17T00:24:58.730733874Z" level=info msg="TearDown network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" successfully" Jan 17 00:24:58.730934 containerd[1603]: time="2026-01-17T00:24:58.730771534Z" level=info msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" returns successfully" Jan 17 00:24:58.731884 containerd[1603]: time="2026-01-17T00:24:58.731833852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mdmw8,Uid:b773dda6-1d12-466d-8ab6-e9b4e6b1277a,Namespace:calico-system,Attempt:1,}" Jan 17 00:24:58.742687 containerd[1603]: time="2026-01-17T00:24:58.742619492Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:58.746648 containerd[1603]: time="2026-01-17T00:24:58.746516139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:24:58.746943 containerd[1603]: time="2026-01-17T00:24:58.746675005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:24:58.747143 kubelet[2735]: E0117 00:24:58.747050 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:24:58.748099 kubelet[2735]: E0117 00:24:58.747160 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:24:58.748099 kubelet[2735]: E0117 00:24:58.747429 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:58.749441 kubelet[2735]: E0117 00:24:58.749329 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:24:58.887467 systemd[1]: run-netns-cni\x2d30fe47c8\x2dc905\x2df265\x2d303d\x2d64d08b5d9894.mount: 
Deactivated successfully. Jan 17 00:24:59.019380 systemd-networkd[1265]: cali6946db907a6: Link UP Jan 17 00:24:59.029467 systemd-networkd[1265]: cali6946db907a6: Gained carrier Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.844 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mdmw8-eth0 goldmane-666569f655- calico-system b773dda6-1d12-466d-8ab6-e9b4e6b1277a 1108 0 2026-01-17 00:24:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mdmw8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6946db907a6 [] [] }} ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.845 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.905 [INFO][4649] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" HandleID="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.906 [INFO][4649] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" HandleID="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003435d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mdmw8", "timestamp":"2026-01-17 00:24:58.905820792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.906 [INFO][4649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.906 [INFO][4649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.907 [INFO][4649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.925 [INFO][4649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.948 [INFO][4649] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.961 [INFO][4649] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.965 [INFO][4649] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.969 [INFO][4649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.969 [INFO][4649] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.972 [INFO][4649] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:58.983 [INFO][4649] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:59.011 [INFO][4649] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:59.011 [INFO][4649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" host="localhost" Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:59.011 [INFO][4649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:24:59.057461 containerd[1603]: 2026-01-17 00:24:59.011 [INFO][4649] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" HandleID="k8s-pod-network.22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.058447 containerd[1603]: 2026-01-17 00:24:59.016 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mdmw8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b773dda6-1d12-466d-8ab6-e9b4e6b1277a", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mdmw8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6946db907a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:24:59.058447 containerd[1603]: 2026-01-17 00:24:59.016 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.058447 containerd[1603]: 2026-01-17 00:24:59.016 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6946db907a6 ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.058447 containerd[1603]: 2026-01-17 00:24:59.024 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.058447 containerd[1603]: 2026-01-17 00:24:59.025 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mdmw8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b773dda6-1d12-466d-8ab6-e9b4e6b1277a", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a", Pod:"goldmane-666569f655-mdmw8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6946db907a6", MAC:"8e:af:61:28:09:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:24:59.058447 containerd[1603]: 2026-01-17 00:24:59.049 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a" Namespace="calico-system" Pod="goldmane-666569f655-mdmw8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:24:59.090547 containerd[1603]: time="2026-01-17T00:24:59.090396839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:24:59.090547 containerd[1603]: time="2026-01-17T00:24:59.090494211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:24:59.090547 containerd[1603]: time="2026-01-17T00:24:59.090518707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:59.090729 containerd[1603]: time="2026-01-17T00:24:59.090642177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:59.157052 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:24:59.227207 containerd[1603]: time="2026-01-17T00:24:59.226345795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mdmw8,Uid:b773dda6-1d12-466d-8ab6-e9b4e6b1277a,Namespace:calico-system,Attempt:1,} returns sandbox id \"22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a\"" Jan 17 00:24:59.231876 containerd[1603]: time="2026-01-17T00:24:59.230708043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:24:59.232041 kubelet[2735]: E0117 00:24:59.231162 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:24:59.337381 containerd[1603]: time="2026-01-17T00:24:59.334737610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:59.340339 containerd[1603]: time="2026-01-17T00:24:59.338869658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:24:59.340339 containerd[1603]: time="2026-01-17T00:24:59.339002767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:24:59.340472 kubelet[2735]: E0117 00:24:59.339187 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:24:59.340472 kubelet[2735]: E0117 00:24:59.339319 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:24:59.340472 kubelet[2735]: E0117 00:24:59.339474 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmvxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:59.341052 kubelet[2735]: E0117 00:24:59.340994 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:24:59.437739 containerd[1603]: 
time="2026-01-17T00:24:59.436206159Z" level=info msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.659 [INFO][4719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.659 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" iface="eth0" netns="/var/run/netns/cni-d56293ab-de1c-2792-f86b-794acf48ad9c" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.659 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" iface="eth0" netns="/var/run/netns/cni-d56293ab-de1c-2792-f86b-794acf48ad9c" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.659 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" iface="eth0" netns="/var/run/netns/cni-d56293ab-de1c-2792-f86b-794acf48ad9c" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.659 [INFO][4719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.659 [INFO][4719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.731 [INFO][4733] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.731 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.731 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.747 [WARNING][4733] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.747 [INFO][4733] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.751 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:24:59.761035 containerd[1603]: 2026-01-17 00:24:59.754 [INFO][4719] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:24:59.766640 containerd[1603]: time="2026-01-17T00:24:59.766540545Z" level=info msg="TearDown network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" successfully" Jan 17 00:24:59.766640 containerd[1603]: time="2026-01-17T00:24:59.766610886Z" level=info msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" returns successfully" Jan 17 00:24:59.767178 kubelet[2735]: E0117 00:24:59.767149 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:24:59.767593 systemd[1]: run-netns-cni\x2dd56293ab\x2dde1c\x2d2792\x2df86b\x2d794acf48ad9c.mount: Deactivated successfully. Jan 17 00:24:59.769110 containerd[1603]: time="2026-01-17T00:24:59.768349935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8cxw,Uid:cd16aa39-f128-48b4-a7b5-ac9f06328314,Namespace:kube-system,Attempt:1,}" Jan 17 00:25:00.132114 systemd-networkd[1265]: cali452605a9df2: Link UP Jan 17 00:25:00.136017 systemd-networkd[1265]: cali452605a9df2: Gained carrier Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.890 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0 coredns-668d6bf9bc- kube-system cd16aa39-f128-48b4-a7b5-ac9f06328314 1125 0 2026-01-17 00:23:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s8cxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali452605a9df2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.891 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.963 [INFO][4755] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" HandleID="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.964 [INFO][4755] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" HandleID="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s8cxw", "timestamp":"2026-01-17 00:24:59.963109542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.964 [INFO][4755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.964 [INFO][4755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.964 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:24:59.975 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.003 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.030 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.037 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.072 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.072 [INFO][4755] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.078 [INFO][4755] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45 Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.097 [INFO][4755] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.116 [INFO][4755] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.116 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" host="localhost" Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.116 [INFO][4755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:25:00.200341 containerd[1603]: 2026-01-17 00:25:00.117 [INFO][4755] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" HandleID="k8s-pod-network.9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.201461 containerd[1603]: 2026-01-17 00:25:00.124 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cd16aa39-f128-48b4-a7b5-ac9f06328314", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s8cxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali452605a9df2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:00.201461 containerd[1603]: 2026-01-17 00:25:00.125 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.201461 containerd[1603]: 2026-01-17 00:25:00.125 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali452605a9df2 ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.201461 containerd[1603]: 2026-01-17 00:25:00.138 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.201461 
containerd[1603]: 2026-01-17 00:25:00.138 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cd16aa39-f128-48b4-a7b5-ac9f06328314", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45", Pod:"coredns-668d6bf9bc-s8cxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali452605a9df2", MAC:"82:f6:19:29:25:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:00.201461 containerd[1603]: 2026-01-17 00:25:00.172 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45" Namespace="kube-system" Pod="coredns-668d6bf9bc-s8cxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:00.224418 systemd-networkd[1265]: cali28ec92be2f1: Gained IPv6LL Jan 17 00:25:00.258784 kubelet[2735]: E0117 00:25:00.258516 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:25:00.261596 kubelet[2735]: E0117 00:25:00.259608 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:25:00.297363 containerd[1603]: time="2026-01-17T00:25:00.295997488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:00.297363 containerd[1603]: time="2026-01-17T00:25:00.296059845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:00.297363 containerd[1603]: time="2026-01-17T00:25:00.296095201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:00.297363 containerd[1603]: time="2026-01-17T00:25:00.296314040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:00.437520 containerd[1603]: time="2026-01-17T00:25:00.432387492Z" level=info msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" Jan 17 00:25:00.473579 systemd-networkd[1265]: cali6946db907a6: Gained IPv6LL Jan 17 00:25:00.475537 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:25:00.601295 containerd[1603]: time="2026-01-17T00:25:00.600972771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s8cxw,Uid:cd16aa39-f128-48b4-a7b5-ac9f06328314,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45\"" Jan 17 00:25:00.604377 kubelet[2735]: E0117 00:25:00.603576 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:00.611458 containerd[1603]: time="2026-01-17T00:25:00.611360624Z" level=info msg="CreateContainer within sandbox \"9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:25:00.700738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831536735.mount: Deactivated successfully. Jan 17 00:25:00.724985 containerd[1603]: time="2026-01-17T00:25:00.724810635Z" level=info msg="CreateContainer within sandbox \"9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8c2cd64b3f7c1d87d5d680c0d1f4ea478cec3e5b7cc04b4205c6fb034e4751a\"" Jan 17 00:25:00.727727 containerd[1603]: time="2026-01-17T00:25:00.727621827Z" level=info msg="StartContainer for \"d8c2cd64b3f7c1d87d5d680c0d1f4ea478cec3e5b7cc04b4205c6fb034e4751a\"" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.644 [INFO][4819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.648 [INFO][4819] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" iface="eth0" netns="/var/run/netns/cni-e6d221c9-2686-537b-f01c-3e3b82827fa2" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.652 [INFO][4819] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" iface="eth0" netns="/var/run/netns/cni-e6d221c9-2686-537b-f01c-3e3b82827fa2" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.653 [INFO][4819] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" iface="eth0" netns="/var/run/netns/cni-e6d221c9-2686-537b-f01c-3e3b82827fa2" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.672 [INFO][4819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.673 [INFO][4819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.734 [INFO][4834] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.741 [INFO][4834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.741 [INFO][4834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.773 [WARNING][4834] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.773 [INFO][4834] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.780 [INFO][4834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:00.801186 containerd[1603]: 2026-01-17 00:25:00.786 [INFO][4819] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:00.807625 containerd[1603]: time="2026-01-17T00:25:00.807369801Z" level=info msg="TearDown network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" successfully" Jan 17 00:25:00.807625 containerd[1603]: time="2026-01-17T00:25:00.807461413Z" level=info msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" returns successfully" Jan 17 00:25:00.810946 containerd[1603]: time="2026-01-17T00:25:00.810725570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pzcck,Uid:bdf7dcb1-7f01-49ed-b25d-dd851c91e195,Namespace:calico-system,Attempt:1,}" Jan 17 00:25:00.884752 systemd[1]: run-netns-cni\x2de6d221c9\x2d2686\x2d537b\x2df01c\x2d3e3b82827fa2.mount: Deactivated successfully. Jan 17 00:25:00.919318 containerd[1603]: time="2026-01-17T00:25:00.918538107Z" level=info msg="StartContainer for \"d8c2cd64b3f7c1d87d5d680c0d1f4ea478cec3e5b7cc04b4205c6fb034e4751a\" returns successfully" Jan 17 00:25:01.265013 kubelet[2735]: E0117 00:25:01.263038 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:01.270478 kubelet[2735]: E0117 00:25:01.270438 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:25:01.327298 kubelet[2735]: I0117 00:25:01.325054 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s8cxw" podStartSLOduration=79.325031605 podStartE2EDuration="1m19.325031605s" podCreationTimestamp="2026-01-17 00:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:25:01.324674309 +0000 UTC m=+82.539628421" watchObservedRunningTime="2026-01-17 00:25:01.325031605 +0000 UTC m=+82.539985746" Jan 17 00:25:01.373334 systemd-networkd[1265]: calif32be8fd437: Link UP Jan 17 00:25:01.374164 systemd-networkd[1265]: calif32be8fd437: Gained carrier Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:00.945 [INFO][4867] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pzcck-eth0 csi-node-driver- calico-system bdf7dcb1-7f01-49ed-b25d-dd851c91e195 1140 0 2026-01-17 00:24:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pzcck eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif32be8fd437 [] [] }} ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:00.945 [INFO][4867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.032 [INFO][4888] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" HandleID="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.033 [INFO][4888] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" HandleID="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pzcck", "timestamp":"2026-01-17 00:25:01.032789113 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.033 [INFO][4888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.033 [INFO][4888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.033 [INFO][4888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.067 [INFO][4888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.231 [INFO][4888] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.266 [INFO][4888] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.277 [INFO][4888] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.286 [INFO][4888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.286 [INFO][4888] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.308 [INFO][4888] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965 Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.321 [INFO][4888] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.346 [INFO][4888] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.346 [INFO][4888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" host="localhost" Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.346 [INFO][4888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:25:01.407598 containerd[1603]: 2026-01-17 00:25:01.346 [INFO][4888] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" HandleID="k8s-pod-network.d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.408837 containerd[1603]: 2026-01-17 00:25:01.356 [INFO][4867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pzcck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdf7dcb1-7f01-49ed-b25d-dd851c91e195", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pzcck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif32be8fd437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:01.408837 containerd[1603]: 2026-01-17 00:25:01.368 [INFO][4867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.408837 containerd[1603]: 2026-01-17 00:25:01.368 [INFO][4867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif32be8fd437 ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.408837 containerd[1603]: 2026-01-17 00:25:01.374 [INFO][4867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.408837 containerd[1603]: 2026-01-17 00:25:01.376 [INFO][4867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pzcck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdf7dcb1-7f01-49ed-b25d-dd851c91e195", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965", Pod:"csi-node-driver-pzcck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif32be8fd437", MAC:"4a:71:11:8b:ca:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:01.408837 containerd[1603]: 2026-01-17 00:25:01.403 [INFO][4867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965" Namespace="calico-system" Pod="csi-node-driver-pzcck" WorkloadEndpoint="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:01.427523 containerd[1603]: time="2026-01-17T00:25:01.427470418Z" level=info msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" Jan 17 00:25:01.488307 containerd[1603]: time="2026-01-17T00:25:01.488049764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:01.488307 containerd[1603]: time="2026-01-17T00:25:01.488146274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:01.488307 containerd[1603]: time="2026-01-17T00:25:01.488164528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:01.488536 containerd[1603]: time="2026-01-17T00:25:01.488371935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:01.566360 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:25:01.613380 containerd[1603]: time="2026-01-17T00:25:01.613329585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pzcck,Uid:bdf7dcb1-7f01-49ed-b25d-dd851c91e195,Namespace:calico-system,Attempt:1,} returns sandbox id \"d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965\"" Jan 17 00:25:01.622293 containerd[1603]: time="2026-01-17T00:25:01.622160640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.595 [INFO][4929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.595 [INFO][4929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" iface="eth0" netns="/var/run/netns/cni-889d97e8-bb0b-0a63-a622-cf6c5f25bb08" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.596 [INFO][4929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" iface="eth0" netns="/var/run/netns/cni-889d97e8-bb0b-0a63-a622-cf6c5f25bb08" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.596 [INFO][4929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" iface="eth0" netns="/var/run/netns/cni-889d97e8-bb0b-0a63-a622-cf6c5f25bb08" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.596 [INFO][4929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.596 [INFO][4929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.640 [INFO][4973] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.641 [INFO][4973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.641 [INFO][4973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.652 [WARNING][4973] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.652 [INFO][4973] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.665 [INFO][4973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:01.674090 containerd[1603]: 2026-01-17 00:25:01.669 [INFO][4929] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:01.675398 containerd[1603]: time="2026-01-17T00:25:01.675329899Z" level=info msg="TearDown network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" successfully" Jan 17 00:25:01.675398 containerd[1603]: time="2026-01-17T00:25:01.675395632Z" level=info msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" returns successfully" Jan 17 00:25:01.677688 containerd[1603]: time="2026-01-17T00:25:01.677578584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-fb2xv,Uid:e3787079-d3c5-4000-91a5-36b644436b7f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:25:01.681036 systemd[1]: run-netns-cni\x2d889d97e8\x2dbb0b\x2d0a63\x2da622\x2dcf6c5f25bb08.mount: Deactivated successfully. Jan 17 00:25:01.700281 containerd[1603]: time="2026-01-17T00:25:01.700146314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:01.709304 containerd[1603]: time="2026-01-17T00:25:01.709268616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:25:01.709642 containerd[1603]: time="2026-01-17T00:25:01.709421062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:25:01.710811 kubelet[2735]: E0117 00:25:01.710077 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:25:01.710811 kubelet[2735]: E0117 00:25:01.710146 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:25:01.710811 kubelet[2735]: E0117 00:25:01.710372 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:01.716080 containerd[1603]: time="2026-01-17T00:25:01.716051277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:25:01.813196 containerd[1603]: time="2026-01-17T00:25:01.813082655Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:01.817946 containerd[1603]: time="2026-01-17T00:25:01.817215335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:25:01.817946 containerd[1603]: time="2026-01-17T00:25:01.817382637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:25:01.818707 kubelet[2735]: E0117 00:25:01.818426 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:25:01.819360 kubelet[2735]: E0117 00:25:01.819089 2735 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:25:01.820536 kubelet[2735]: E0117 00:25:01.820463 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:01.823371 kubelet[2735]: E0117 00:25:01.822597 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:02.010788 systemd-networkd[1265]: cali89249ecdf61: Link UP Jan 17 00:25:02.011681 systemd-networkd[1265]: cali89249ecdf61: Gained carrier Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.793 [INFO][4984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0 calico-apiserver-575b9f78b6- calico-apiserver e3787079-d3c5-4000-91a5-36b644436b7f 1159 0 2026-01-17 00:24:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575b9f78b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-575b9f78b6-fb2xv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89249ecdf61 [] [] }} ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.794 [INFO][4984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.870 [INFO][4998] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" HandleID="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.870 [INFO][4998] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" HandleID="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033b7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-575b9f78b6-fb2xv", "timestamp":"2026-01-17 00:25:01.870725925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.870 [INFO][4998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.871 [INFO][4998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.871 [INFO][4998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.892 [INFO][4998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.918 [INFO][4998] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.928 [INFO][4998] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.934 [INFO][4998] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.938 [INFO][4998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.938 [INFO][4998] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.947 [INFO][4998] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356 Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.972 [INFO][4998] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.990 [INFO][4998] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.991 [INFO][4998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" host="localhost" Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.991 [INFO][4998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:25:02.049475 containerd[1603]: 2026-01-17 00:25:01.991 [INFO][4998] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" HandleID="k8s-pod-network.0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.050850 containerd[1603]: 2026-01-17 00:25:01.998 [INFO][4984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3787079-d3c5-4000-91a5-36b644436b7f", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-575b9f78b6-fb2xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89249ecdf61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:02.050850 containerd[1603]: 2026-01-17 00:25:01.998 [INFO][4984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.050850 containerd[1603]: 2026-01-17 00:25:01.998 [INFO][4984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89249ecdf61 ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.050850 containerd[1603]: 2026-01-17 00:25:02.014 [INFO][4984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.050850 containerd[1603]: 2026-01-17 00:25:02.015 [INFO][4984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3787079-d3c5-4000-91a5-36b644436b7f", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356", Pod:"calico-apiserver-575b9f78b6-fb2xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89249ecdf61", MAC:"ae:b0:c3:13:31:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:02.050850 containerd[1603]: 2026-01-17 00:25:02.042 [INFO][4984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-fb2xv" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:02.135628 systemd-networkd[1265]: cali452605a9df2: Gained IPv6LL Jan 17 00:25:02.139397 containerd[1603]: time="2026-01-17T00:25:02.138974878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:02.139397 containerd[1603]: time="2026-01-17T00:25:02.139069484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:02.139397 containerd[1603]: time="2026-01-17T00:25:02.139084814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:02.140616 containerd[1603]: time="2026-01-17T00:25:02.140493294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:02.199062 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:25:02.276516 kubelet[2735]: E0117 00:25:02.276429 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:02.277489 kubelet[2735]: E0117 00:25:02.277420 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:02.278109 containerd[1603]: time="2026-01-17T00:25:02.278030057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-fb2xv,Uid:e3787079-d3c5-4000-91a5-36b644436b7f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356\"" Jan 17 00:25:02.281039 containerd[1603]: time="2026-01-17T00:25:02.280901317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:25:02.361187 containerd[1603]: time="2026-01-17T00:25:02.360675194Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:02.367093 containerd[1603]: time="2026-01-17T00:25:02.364960840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:25:02.367093 containerd[1603]: time="2026-01-17T00:25:02.365063099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:02.371870 kubelet[2735]: E0117 00:25:02.371591 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:02.371870 kubelet[2735]: E0117 00:25:02.371660 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:02.371870 kubelet[2735]: E0117 00:25:02.371803 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9wrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:02.373405 kubelet[2735]: E0117 00:25:02.373348 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:25:02.427991 containerd[1603]: time="2026-01-17T00:25:02.427451033Z" level=info msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" Jan 17 00:25:02.427991 containerd[1603]: time="2026-01-17T00:25:02.427482419Z" level=info msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.589 
[INFO][5081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.590 [INFO][5081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" iface="eth0" netns="/var/run/netns/cni-bfe78eed-3558-bb48-030a-a2c8d37ec786" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.590 [INFO][5081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" iface="eth0" netns="/var/run/netns/cni-bfe78eed-3558-bb48-030a-a2c8d37ec786" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.591 [INFO][5081] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" iface="eth0" netns="/var/run/netns/cni-bfe78eed-3558-bb48-030a-a2c8d37ec786" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.591 [INFO][5081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.591 [INFO][5081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.686 [INFO][5098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.686 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.687 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.710 [WARNING][5098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.710 [INFO][5098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.716 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:02.732704 containerd[1603]: 2026-01-17 00:25:02.720 [INFO][5081] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:02.741827 containerd[1603]: time="2026-01-17T00:25:02.741441005Z" level=info msg="TearDown network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" successfully" Jan 17 00:25:02.741827 containerd[1603]: time="2026-01-17T00:25:02.741544108Z" level=info msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" returns successfully" Jan 17 00:25:02.745331 kubelet[2735]: E0117 00:25:02.742370 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:02.745475 containerd[1603]: time="2026-01-17T00:25:02.743332115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glqqh,Uid:388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c,Namespace:kube-system,Attempt:1,}" Jan 17 00:25:02.746781 systemd[1]: run-netns-cni\x2dbfe78eed\x2d3558\x2dbb48\x2d030a\x2da2c8d37ec786.mount: Deactivated successfully. Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.620 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.621 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" iface="eth0" netns="/var/run/netns/cni-0d6ed120-572c-272c-9deb-9970761d932d" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.622 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" iface="eth0" netns="/var/run/netns/cni-0d6ed120-572c-272c-9deb-9970761d932d" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.622 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" iface="eth0" netns="/var/run/netns/cni-0d6ed120-572c-272c-9deb-9970761d932d" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.622 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.622 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.696 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.698 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.716 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.734 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.734 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.742 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:02.767174 containerd[1603]: 2026-01-17 00:25:02.753 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:02.772317 containerd[1603]: time="2026-01-17T00:25:02.771214086Z" level=info msg="TearDown network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" successfully" Jan 17 00:25:02.772317 containerd[1603]: time="2026-01-17T00:25:02.771351733Z" level=info msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" returns successfully" Jan 17 00:25:02.772603 containerd[1603]: time="2026-01-17T00:25:02.772562355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-2wpqn,Uid:16ee4324-8757-4618-9329-530899bfb3f8,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:25:02.774095 systemd[1]: run-netns-cni\x2d0d6ed120\x2d572c\x2d272c\x2d9deb\x2d9970761d932d.mount: Deactivated successfully. Jan 17 00:25:03.033463 systemd-networkd[1265]: calic2fa3f8e3e2: Link UP Jan 17 00:25:03.034614 systemd-networkd[1265]: calic2fa3f8e3e2: Gained carrier Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.871 [INFO][5115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--glqqh-eth0 coredns-668d6bf9bc- kube-system 388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c 1186 0 2026-01-17 00:23:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-glqqh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic2fa3f8e3e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.871 [INFO][5115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.946 [INFO][5146] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" HandleID="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.081804 
containerd[1603]: 2026-01-17 00:25:02.947 [INFO][5146] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" HandleID="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-glqqh", "timestamp":"2026-01-17 00:25:02.946709586 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.947 [INFO][5146] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.947 [INFO][5146] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.947 [INFO][5146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.961 [INFO][5146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.971 [INFO][5146] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.985 [INFO][5146] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.989 [INFO][5146] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.994 [INFO][5146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.994 [INFO][5146] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:02.997 [INFO][5146] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6 Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:03.004 [INFO][5146] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:03.017 [INFO][5146] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:03.017 [INFO][5146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" host="localhost" Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:03.017 [INFO][5146] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:25:03.081804 containerd[1603]: 2026-01-17 00:25:03.017 [INFO][5146] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" HandleID="k8s-pod-network.21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.084747 containerd[1603]: 2026-01-17 00:25:03.023 [INFO][5115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--glqqh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-glqqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2fa3f8e3e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:03.084747 containerd[1603]: 2026-01-17 00:25:03.023 [INFO][5115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.084747 containerd[1603]: 2026-01-17 00:25:03.023 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2fa3f8e3e2 ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.084747 containerd[1603]: 2026-01-17 00:25:03.032 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.084747 
containerd[1603]: 2026-01-17 00:25:03.036 [INFO][5115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--glqqh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6", Pod:"coredns-668d6bf9bc-glqqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2fa3f8e3e2", MAC:"de:3d:93:d6:b5:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:03.084747 containerd[1603]: 2026-01-17 00:25:03.075 [INFO][5115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6" Namespace="kube-system" Pod="coredns-668d6bf9bc-glqqh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:03.139968 containerd[1603]: time="2026-01-17T00:25:03.139654078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:03.140349 containerd[1603]: time="2026-01-17T00:25:03.140195129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:03.140349 containerd[1603]: time="2026-01-17T00:25:03.140215538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:03.141013 containerd[1603]: time="2026-01-17T00:25:03.140808085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:03.183206 systemd-networkd[1265]: cali5bafa2e964a: Link UP Jan 17 00:25:03.184401 systemd-networkd[1265]: cali5bafa2e964a: Gained carrier Jan 17 00:25:03.211592 systemd[1]: run-containerd-runc-k8s.io-21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6-runc.rVzZSi.mount: Deactivated successfully. Jan 17 00:25:03.223775 systemd-networkd[1265]: calif32be8fd437: Gained IPv6LL Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:02.888 [INFO][5130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0 calico-apiserver-575b9f78b6- calico-apiserver 16ee4324-8757-4618-9329-530899bfb3f8 1187 0 2026-01-17 00:24:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575b9f78b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-575b9f78b6-2wpqn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5bafa2e964a [] [] }} ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:02.888 [INFO][5130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:02.951 [INFO][5152] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" HandleID="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:02.951 [INFO][5152] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" HandleID="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-575b9f78b6-2wpqn", "timestamp":"2026-01-17 00:25:02.951163847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:02.951 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.018 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.018 [INFO][5152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.068 [INFO][5152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.088 [INFO][5152] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.102 [INFO][5152] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.108 [INFO][5152] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.117 [INFO][5152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.117 [INFO][5152] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.120 [INFO][5152] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43 Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.139 [INFO][5152] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.171 [INFO][5152] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.171 [INFO][5152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" host="localhost" Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.172 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:25:03.240893 containerd[1603]: 2026-01-17 00:25:03.172 [INFO][5152] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" HandleID="k8s-pod-network.221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.241964 containerd[1603]: 2026-01-17 00:25:03.176 [INFO][5130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ee4324-8757-4618-9329-530899bfb3f8", ResourceVersion:"1187", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-575b9f78b6-2wpqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bafa2e964a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:03.241964 containerd[1603]: 2026-01-17 00:25:03.176 [INFO][5130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.241964 containerd[1603]: 2026-01-17 00:25:03.176 [INFO][5130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bafa2e964a ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.241964 containerd[1603]: 2026-01-17 00:25:03.184 [INFO][5130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.241964 containerd[1603]: 2026-01-17 00:25:03.185 [INFO][5130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ee4324-8757-4618-9329-530899bfb3f8", ResourceVersion:"1187", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43", Pod:"calico-apiserver-575b9f78b6-2wpqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bafa2e964a", MAC:"26:7e:b2:f5:03:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:03.241964 containerd[1603]: 2026-01-17 00:25:03.217 [INFO][5130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43" Namespace="calico-apiserver" Pod="calico-apiserver-575b9f78b6-2wpqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:03.259122 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:25:03.282089 kubelet[2735]: E0117 00:25:03.280950 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:03.284605 kubelet[2735]: E0117 00:25:03.283904 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:25:03.288827 kubelet[2735]: E0117 00:25:03.286954 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:03.338493 containerd[1603]: time="2026-01-17T00:25:03.337356178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glqqh,Uid:388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c,Namespace:kube-system,Attempt:1,} returns sandbox id \"21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6\"" Jan 17 00:25:03.339469 kubelet[2735]: E0117 00:25:03.339042 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:03.347750 containerd[1603]: time="2026-01-17T00:25:03.344772812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:03.347750 containerd[1603]: time="2026-01-17T00:25:03.344830840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:03.347750 containerd[1603]: time="2026-01-17T00:25:03.344846059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:03.351346 containerd[1603]: time="2026-01-17T00:25:03.350137406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:03.354292 containerd[1603]: time="2026-01-17T00:25:03.352819342Z" level=info msg="CreateContainer within sandbox \"21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:25:03.423770 containerd[1603]: time="2026-01-17T00:25:03.423591916Z" level=info msg="CreateContainer within sandbox \"21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79cc02341c52c5f8608ab8c5d3fcc2d150825fed3c20c5bef50fd602aaaa0ec5\"" Jan 17 00:25:03.427045 containerd[1603]: time="2026-01-17T00:25:03.426082235Z" level=info msg="StartContainer for \"79cc02341c52c5f8608ab8c5d3fcc2d150825fed3c20c5bef50fd602aaaa0ec5\"" Jan 17 00:25:03.435521 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:25:03.523153 containerd[1603]: time="2026-01-17T00:25:03.520431212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b9f78b6-2wpqn,Uid:16ee4324-8757-4618-9329-530899bfb3f8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43\"" Jan 17 00:25:03.525962 containerd[1603]: time="2026-01-17T00:25:03.525417531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:25:03.559412 containerd[1603]: time="2026-01-17T00:25:03.556533376Z" level=info msg="StartContainer for \"79cc02341c52c5f8608ab8c5d3fcc2d150825fed3c20c5bef50fd602aaaa0ec5\" returns successfully" Jan 17 00:25:03.614560 containerd[1603]: time="2026-01-17T00:25:03.614504698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:03.619441 containerd[1603]: time="2026-01-17T00:25:03.619048475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:25:03.620210 kubelet[2735]: E0117 00:25:03.619670 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:03.620210 kubelet[2735]: E0117 00:25:03.619729 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:03.620210 kubelet[2735]: E0117 00:25:03.619884 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r9hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:03.622220 containerd[1603]: time="2026-01-17T00:25:03.619161616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:03.622344 kubelet[2735]: E0117 00:25:03.621062 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:03.735464 systemd-networkd[1265]: cali89249ecdf61: Gained IPv6LL Jan 17 00:25:04.251195 systemd-networkd[1265]: cali5bafa2e964a: Gained IPv6LL Jan 17 00:25:04.287734 kubelet[2735]: E0117 00:25:04.287686 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:04.295080 kubelet[2735]: E0117 00:25:04.292903 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:04.296070 kubelet[2735]: E0117 00:25:04.294877 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:25:04.426681 kubelet[2735]: E0117 00:25:04.426607 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:04.458600 kubelet[2735]: I0117 00:25:04.457426 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-glqqh" podStartSLOduration=82.457400153 podStartE2EDuration="1m22.457400153s" podCreationTimestamp="2026-01-17 00:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:25:04.381438546 +0000 UTC m=+85.596392649" watchObservedRunningTime="2026-01-17 00:25:04.457400153 +0000 UTC m=+85.672354256" Jan 17 00:25:04.951301 systemd-networkd[1265]: calic2fa3f8e3e2: Gained IPv6LL Jan 17 00:25:05.307202 kubelet[2735]: E0117 00:25:05.306009 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:05.313173 kubelet[2735]: E0117 00:25:05.309984 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:06.312088 kubelet[2735]: E0117 00:25:06.310822 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:08.429662 containerd[1603]: time="2026-01-17T00:25:08.428710348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:25:08.534860 containerd[1603]: time="2026-01-17T00:25:08.533886901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:08.545390 containerd[1603]: time="2026-01-17T00:25:08.545174495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:25:08.545557 containerd[1603]: time="2026-01-17T00:25:08.545333132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:25:08.545907 kubelet[2735]: E0117 00:25:08.545816 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:25:08.546773 kubelet[2735]: E0117 00:25:08.545909 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:25:08.546773 kubelet[2735]: E0117 00:25:08.546077 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f46c655c0c40418aada782a2b06c3fb5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:08.553884 containerd[1603]: time="2026-01-17T00:25:08.553527816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:25:08.642420 containerd[1603]: time="2026-01-17T00:25:08.641461847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:08.663399 containerd[1603]: 
time="2026-01-17T00:25:08.652423373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:25:08.663399 containerd[1603]: time="2026-01-17T00:25:08.652536324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:25:08.663610 kubelet[2735]: E0117 00:25:08.653137 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:25:08.663610 kubelet[2735]: E0117 00:25:08.653195 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:25:08.663610 kubelet[2735]: E0117 00:25:08.653557 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:08.663610 kubelet[2735]: E0117 00:25:08.654801 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:25:11.429468 containerd[1603]: time="2026-01-17T00:25:11.428745096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:25:11.550916 containerd[1603]: time="2026-01-17T00:25:11.543411341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:11.562641 containerd[1603]: time="2026-01-17T00:25:11.562567047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:25:11.562641 containerd[1603]: time="2026-01-17T00:25:11.562672544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:25:11.563529 kubelet[2735]: E0117 00:25:11.563189 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:25:11.566534 kubelet[2735]: E0117 00:25:11.563596 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:25:11.566534 kubelet[2735]: E0117 00:25:11.564077 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:11.566534 kubelet[2735]: E0117 00:25:11.565210 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:25:14.428991 kubelet[2735]: E0117 00:25:14.427769 2735 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:16.430187 containerd[1603]: time="2026-01-17T00:25:16.428388964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:25:16.526332 containerd[1603]: time="2026-01-17T00:25:16.526126214Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:16.528127 containerd[1603]: time="2026-01-17T00:25:16.528077087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:25:16.528365 containerd[1603]: time="2026-01-17T00:25:16.528161341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:25:16.528481 kubelet[2735]: E0117 00:25:16.528351 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:25:16.528481 kubelet[2735]: E0117 00:25:16.528397 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:25:16.529359 kubelet[2735]: E0117 00:25:16.528620 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:16.529805 containerd[1603]: time="2026-01-17T00:25:16.529214302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:25:16.639397 containerd[1603]: time="2026-01-17T00:25:16.639303202Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:16.645007 containerd[1603]: time="2026-01-17T00:25:16.644899865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:25:16.645123 containerd[1603]: time="2026-01-17T00:25:16.645066166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:16.647857 kubelet[2735]: E0117 00:25:16.646646 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:25:16.647857 kubelet[2735]: E0117 00:25:16.646709 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:25:16.647857 kubelet[2735]: E0117 00:25:16.647210 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmvxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:16.648866 containerd[1603]: time="2026-01-17T00:25:16.648555910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:25:16.650435 kubelet[2735]: E0117 00:25:16.649184 2735 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:25:16.757874 containerd[1603]: time="2026-01-17T00:25:16.757743063Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:16.764415 containerd[1603]: time="2026-01-17T00:25:16.764332112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:25:16.764486 containerd[1603]: time="2026-01-17T00:25:16.764446745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:25:16.764986 kubelet[2735]: E0117 00:25:16.764803 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:25:16.764986 kubelet[2735]: E0117 00:25:16.764891 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:25:16.765106 kubelet[2735]: E0117 00:25:16.765064 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:16.767532 kubelet[2735]: E0117 00:25:16.766317 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:17.432655 kubelet[2735]: E0117 00:25:17.431113 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:19.445762 containerd[1603]: time="2026-01-17T00:25:19.443715046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:25:19.545705 containerd[1603]: time="2026-01-17T00:25:19.545559768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 
00:25:19.550845 containerd[1603]: time="2026-01-17T00:25:19.550709327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:25:19.551017 containerd[1603]: time="2026-01-17T00:25:19.550840723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:19.551096 kubelet[2735]: E0117 00:25:19.551044 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:19.551805 kubelet[2735]: E0117 00:25:19.551109 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:19.551805 kubelet[2735]: E0117 00:25:19.551357 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9wrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:19.558591 kubelet[2735]: E0117 00:25:19.558165 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:25:20.429711 containerd[1603]: time="2026-01-17T00:25:20.429635968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:25:20.511469 containerd[1603]: time="2026-01-17T00:25:20.511415914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:20.515165 containerd[1603]: time="2026-01-17T00:25:20.515093152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:25:20.515408 containerd[1603]: time="2026-01-17T00:25:20.515306520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:20.515541 kubelet[2735]: E0117 00:25:20.515475 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:20.515675 kubelet[2735]: E0117 00:25:20.515581 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:20.517126 kubelet[2735]: E0117 00:25:20.515754 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r9hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:20.517568 kubelet[2735]: E0117 00:25:20.517516 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:21.434684 kubelet[2735]: E0117 00:25:21.430423 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:24.433834 kubelet[2735]: E0117 00:25:24.432892 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:25:25.440940 kubelet[2735]: E0117 00:25:25.437535 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:25:27.451349 kubelet[2735]: E0117 00:25:27.449901 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:31.446076 kubelet[2735]: E0117 00:25:31.445792 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:31.446076 kubelet[2735]: E0117 00:25:31.445807 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:25:34.435932 
kubelet[2735]: E0117 00:25:34.432121 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:25:36.429272 containerd[1603]: time="2026-01-17T00:25:36.428335072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:25:36.543891 containerd[1603]: time="2026-01-17T00:25:36.543796020Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:36.547649 containerd[1603]: time="2026-01-17T00:25:36.546152927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:25:36.547649 containerd[1603]: time="2026-01-17T00:25:36.546315821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:25:36.547804 kubelet[2735]: E0117 00:25:36.546489 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:25:36.547804 kubelet[2735]: E0117 00:25:36.546556 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:25:36.547804 kubelet[2735]: E0117 00:25:36.546709 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:36.566869 kubelet[2735]: E0117 00:25:36.552446 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:25:37.664003 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:38930.service - OpenSSH 
per-connection server daemon (10.0.0.1:38930). Jan 17 00:25:37.824073 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:25:37.829612 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:37.865219 systemd-logind[1580]: New session 8 of user core. Jan 17 00:25:37.876163 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:25:38.291670 sshd[5364]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:38.301796 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:38930.service: Deactivated successfully. Jan 17 00:25:38.319530 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:25:38.323465 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:25:38.325775 systemd-logind[1580]: Removed session 8. Jan 17 00:25:39.242721 containerd[1603]: time="2026-01-17T00:25:39.242331523Z" level=info msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" Jan 17 00:25:39.435057 containerd[1603]: time="2026-01-17T00:25:39.434178224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:25:39.568432 containerd[1603]: time="2026-01-17T00:25:39.565829028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:39.577201 containerd[1603]: time="2026-01-17T00:25:39.573725977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:25:39.577201 containerd[1603]: time="2026-01-17T00:25:39.573890624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:25:39.577443 kubelet[2735]: E0117 00:25:39.574605 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:25:39.577443 kubelet[2735]: E0117 00:25:39.574669 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:25:39.577443 kubelet[2735]: E0117 00:25:39.574797 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f46c655c0c40418aada782a2b06c3fb5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:39.618826 containerd[1603]: time="2026-01-17T00:25:39.606558791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.532 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0", GenerateName:"calico-kube-controllers-6c64f7b875-", Namespace:"calico-system", SelfLink:"", UID:"7904b8c1-aed5-4856-a748-a81b4e03c215", ResourceVersion:"1362", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c64f7b875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308", Pod:"calico-kube-controllers-6c64f7b875-k79d8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28ec92be2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.532 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.532 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" iface="eth0" netns="" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.532 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.532 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.697 [INFO][5399] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.697 [INFO][5399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.697 [INFO][5399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.717 [WARNING][5399] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.717 [INFO][5399] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.723 [INFO][5399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:39.745326 containerd[1603]: 2026-01-17 00:25:39.738 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:39.745326 containerd[1603]: time="2026-01-17T00:25:39.743739909Z" level=info msg="TearDown network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" successfully" Jan 17 00:25:39.745326 containerd[1603]: time="2026-01-17T00:25:39.743773271Z" level=info msg="StopPodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" returns successfully" Jan 17 00:25:39.745326 containerd[1603]: time="2026-01-17T00:25:39.744656428Z" level=info msg="RemovePodSandbox for \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" Jan 17 00:25:39.763281 containerd[1603]: time="2026-01-17T00:25:39.753729277Z" level=info msg="Forcibly stopping sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\"" Jan 17 00:25:39.788696 containerd[1603]: time="2026-01-17T00:25:39.788405805Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:39.798331 containerd[1603]: time="2026-01-17T00:25:39.797560314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:25:39.799120 containerd[1603]: time="2026-01-17T00:25:39.798907489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:25:39.804923 kubelet[2735]: E0117 00:25:39.799488 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:25:39.804923 kubelet[2735]: E0117 00:25:39.799553 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:25:39.804923 kubelet[2735]: E0117 00:25:39.799689 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:39.804923 kubelet[2735]: E0117 00:25:39.800817 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:39.894 [WARNING][5416] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0", GenerateName:"calico-kube-controllers-6c64f7b875-", Namespace:"calico-system", SelfLink:"", UID:"7904b8c1-aed5-4856-a748-a81b4e03c215", ResourceVersion:"1362", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c64f7b875", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c58182afcd92009aa0b5c817dbee9bf858253a4528efc47f43e2417f1b363308", Pod:"calico-kube-controllers-6c64f7b875-k79d8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28ec92be2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:39.895 [INFO][5416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:39.896 [INFO][5416] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" iface="eth0" netns="" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:39.896 [INFO][5416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:39.897 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:39.995 [INFO][5424] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:40.007 [INFO][5424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:40.007 [INFO][5424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:40.033 [WARNING][5424] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:40.034 [INFO][5424] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" HandleID="k8s-pod-network.1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Workload="localhost-k8s-calico--kube--controllers--6c64f7b875--k79d8-eth0" Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:40.042 [INFO][5424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:40.070856 containerd[1603]: 2026-01-17 00:25:40.060 [INFO][5416] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b" Jan 17 00:25:40.071585 containerd[1603]: time="2026-01-17T00:25:40.071501839Z" level=info msg="TearDown network for sandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" successfully" Jan 17 00:25:40.096361 containerd[1603]: time="2026-01-17T00:25:40.096013725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:40.096361 containerd[1603]: time="2026-01-17T00:25:40.096158095Z" level=info msg="RemovePodSandbox \"1b4b16b9aa5806a142b801be7ade11f9e2ad77cfa192a75cb79f73280a8bd77b\" returns successfully" Jan 17 00:25:40.097298 containerd[1603]: time="2026-01-17T00:25:40.096875275Z" level=info msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.325 [WARNING][5441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3787079-d3c5-4000-91a5-36b644436b7f", ResourceVersion:"1355", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356", Pod:"calico-apiserver-575b9f78b6-fb2xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89249ecdf61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.326 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.326 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" iface="eth0" netns="" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.326 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.326 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.416 [INFO][5449] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.416 [INFO][5449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.416 [INFO][5449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.432 [WARNING][5449] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.432 [INFO][5449] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.437 [INFO][5449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:40.451569 containerd[1603]: 2026-01-17 00:25:40.444 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.451569 containerd[1603]: time="2026-01-17T00:25:40.451443154Z" level=info msg="TearDown network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" successfully" Jan 17 00:25:40.451569 containerd[1603]: time="2026-01-17T00:25:40.451466969Z" level=info msg="StopPodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" returns successfully" Jan 17 00:25:40.454627 containerd[1603]: time="2026-01-17T00:25:40.454526373Z" level=info msg="RemovePodSandbox for \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" Jan 17 00:25:40.454627 containerd[1603]: time="2026-01-17T00:25:40.454559385Z" level=info msg="Forcibly stopping sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\"" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.562 [WARNING][5467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3787079-d3c5-4000-91a5-36b644436b7f", ResourceVersion:"1355", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d8033470a0554b9d289b2ee21b32914daffac4bba28278115b47b887f13f356", Pod:"calico-apiserver-575b9f78b6-fb2xv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89249ecdf61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.562 [INFO][5467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.562 [INFO][5467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" iface="eth0" netns="" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.563 [INFO][5467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.563 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.691 [INFO][5476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.691 [INFO][5476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.692 [INFO][5476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.710 [WARNING][5476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.710 [INFO][5476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" HandleID="k8s-pod-network.0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Workload="localhost-k8s-calico--apiserver--575b9f78b6--fb2xv-eth0" Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.716 [INFO][5476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:40.730417 containerd[1603]: 2026-01-17 00:25:40.721 [INFO][5467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e" Jan 17 00:25:40.730417 containerd[1603]: time="2026-01-17T00:25:40.730283345Z" level=info msg="TearDown network for sandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" successfully" Jan 17 00:25:40.742203 containerd[1603]: time="2026-01-17T00:25:40.741916103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:40.742203 containerd[1603]: time="2026-01-17T00:25:40.741999919Z" level=info msg="RemovePodSandbox \"0c6c5d9c81c5ad6844e4e0ca4b9ba34dc2235ee086e599f74aa4d0b9182f976e\" returns successfully" Jan 17 00:25:40.743568 containerd[1603]: time="2026-01-17T00:25:40.743406321Z" level=info msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.846 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pzcck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdf7dcb1-7f01-49ed-b25d-dd851c91e195", ResourceVersion:"1332", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965", Pod:"csi-node-driver-pzcck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif32be8fd437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.848 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.848 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" iface="eth0" netns="" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.848 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.848 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.931 [INFO][5502] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.932 [INFO][5502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.932 [INFO][5502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.965 [WARNING][5502] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.965 [INFO][5502] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:40.999 [INFO][5502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:41.043480 containerd[1603]: 2026-01-17 00:25:41.023 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.069287 containerd[1603]: time="2026-01-17T00:25:41.048425317Z" level=info msg="TearDown network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" successfully" Jan 17 00:25:41.069287 containerd[1603]: time="2026-01-17T00:25:41.048467456Z" level=info msg="StopPodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" returns successfully" Jan 17 00:25:41.102783 containerd[1603]: time="2026-01-17T00:25:41.102531161Z" level=info msg="RemovePodSandbox for \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" Jan 17 00:25:41.102783 containerd[1603]: time="2026-01-17T00:25:41.102577997Z" level=info msg="Forcibly stopping sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\"" Jan 17 00:25:41.428886 containerd[1603]: time="2026-01-17T00:25:41.428797318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.290 [WARNING][5519] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pzcck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bdf7dcb1-7f01-49ed-b25d-dd851c91e195", ResourceVersion:"1332", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d074a8a16f8b9ee7ba300a10b22fad2f040d59c021eebe2533cd7aeab5297965", Pod:"csi-node-driver-pzcck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif32be8fd437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.290 [INFO][5519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.290 [INFO][5519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" iface="eth0" netns="" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.290 [INFO][5519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.290 [INFO][5519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.404 [INFO][5529] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.404 [INFO][5529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.404 [INFO][5529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.420 [WARNING][5529] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.420 [INFO][5529] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" HandleID="k8s-pod-network.fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Workload="localhost-k8s-csi--node--driver--pzcck-eth0" Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.425 [INFO][5529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:41.451338 containerd[1603]: 2026-01-17 00:25:41.438 [INFO][5519] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8" Jan 17 00:25:41.451338 containerd[1603]: time="2026-01-17T00:25:41.448625757Z" level=info msg="TearDown network for sandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" successfully" Jan 17 00:25:41.470548 containerd[1603]: time="2026-01-17T00:25:41.470502147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:41.470721 containerd[1603]: time="2026-01-17T00:25:41.470700126Z" level=info msg="RemovePodSandbox \"fef3296d7fa24efa69ce431aa8b6125bdb4a7ad8d0a9c7580babe3b874c4fdb8\" returns successfully" Jan 17 00:25:41.472886 containerd[1603]: time="2026-01-17T00:25:41.472827088Z" level=info msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" Jan 17 00:25:41.531002 containerd[1603]: time="2026-01-17T00:25:41.527405209Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:41.535415 containerd[1603]: time="2026-01-17T00:25:41.535207957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:25:41.535533 containerd[1603]: time="2026-01-17T00:25:41.535443167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:25:41.536618 kubelet[2735]: E0117 00:25:41.536571 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:25:41.537289 kubelet[2735]: E0117 00:25:41.537195 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:25:41.537567 kubelet[2735]: E0117 00:25:41.537512 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:41.541652 containerd[1603]: time="2026-01-17T00:25:41.541565747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:25:41.629153 containerd[1603]: time="2026-01-17T00:25:41.627728221Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:41.645429 containerd[1603]: time="2026-01-17T00:25:41.644680639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:25:41.645663 containerd[1603]: time="2026-01-17T00:25:41.645625363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:25:41.661838 kubelet[2735]: E0117 00:25:41.646002 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:25:41.661838 kubelet[2735]: E0117 00:25:41.661792 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:25:41.670174 kubelet[2735]: E0117 00:25:41.669796 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:41.671090 kubelet[2735]: E0117 00:25:41.671007 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.598 [WARNING][5545] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cd16aa39-f128-48b4-a7b5-ac9f06328314", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45", Pod:"coredns-668d6bf9bc-s8cxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali452605a9df2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.599 [INFO][5545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.599 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" iface="eth0" netns="" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.600 [INFO][5545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.600 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.727 [INFO][5553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.727 [INFO][5553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.727 [INFO][5553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.741 [WARNING][5553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.742 [INFO][5553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.751 [INFO][5553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:41.776999 containerd[1603]: 2026-01-17 00:25:41.757 [INFO][5545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:41.776999 containerd[1603]: time="2026-01-17T00:25:41.776317788Z" level=info msg="TearDown network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" successfully" Jan 17 00:25:41.776999 containerd[1603]: time="2026-01-17T00:25:41.776418396Z" level=info msg="StopPodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" returns successfully" Jan 17 00:25:41.779760 containerd[1603]: time="2026-01-17T00:25:41.779331176Z" level=info msg="RemovePodSandbox for \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" Jan 17 00:25:41.779760 containerd[1603]: time="2026-01-17T00:25:41.779371501Z" level=info msg="Forcibly stopping sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\"" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.901 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cd16aa39-f128-48b4-a7b5-ac9f06328314", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d651c11b270095d3ceb71290fd961f4645dbea8e484550329b31d97aeb5ac45", Pod:"coredns-668d6bf9bc-s8cxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali452605a9df2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.903 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.903 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" iface="eth0" netns="" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.903 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.903 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.980 [INFO][5576] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.981 [INFO][5576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:41.981 [INFO][5576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:42.006 [WARNING][5576] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:42.006 [INFO][5576] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" HandleID="k8s-pod-network.7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Workload="localhost-k8s-coredns--668d6bf9bc--s8cxw-eth0" Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:42.012 [INFO][5576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:42.036738 containerd[1603]: 2026-01-17 00:25:42.021 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af" Jan 17 00:25:42.038211 containerd[1603]: time="2026-01-17T00:25:42.037463986Z" level=info msg="TearDown network for sandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" successfully" Jan 17 00:25:42.082882 containerd[1603]: time="2026-01-17T00:25:42.082769595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:42.083518 containerd[1603]: time="2026-01-17T00:25:42.083114930Z" level=info msg="RemovePodSandbox \"7e24c220c35cfd40785d3b82fd697ff9d1dc2ba3b6687563e06ab315cbdfe6af\" returns successfully" Jan 17 00:25:42.089786 containerd[1603]: time="2026-01-17T00:25:42.089388007Z" level=info msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" Jan 17 00:25:42.433816 containerd[1603]: time="2026-01-17T00:25:42.433666998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.284 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mdmw8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b773dda6-1d12-466d-8ab6-e9b4e6b1277a", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a", Pod:"goldmane-666569f655-mdmw8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6946db907a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.284 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.284 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" iface="eth0" netns="" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.285 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.285 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.392 [INFO][5600] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.393 [INFO][5600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.394 [INFO][5600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.412 [WARNING][5600] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.412 [INFO][5600] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.416 [INFO][5600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:42.436980 containerd[1603]: 2026-01-17 00:25:42.427 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.441783 containerd[1603]: time="2026-01-17T00:25:42.437443232Z" level=info msg="TearDown network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" successfully" Jan 17 00:25:42.441783 containerd[1603]: time="2026-01-17T00:25:42.437665747Z" level=info msg="StopPodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" returns successfully" Jan 17 00:25:42.443747 containerd[1603]: time="2026-01-17T00:25:42.443665964Z" level=info msg="RemovePodSandbox for \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" Jan 17 00:25:42.445391 containerd[1603]: time="2026-01-17T00:25:42.443703815Z" level=info msg="Forcibly stopping sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\"" Jan 17 00:25:42.600887 containerd[1603]: time="2026-01-17T00:25:42.600483726Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:42.604895 containerd[1603]: time="2026-01-17T00:25:42.604282431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:25:42.604895 containerd[1603]: time="2026-01-17T00:25:42.604824102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:42.607184 kubelet[2735]: E0117 00:25:42.605609 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:42.607184 kubelet[2735]: E0117 00:25:42.606577 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:42.615219 kubelet[2735]: E0117 00:25:42.615139 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r9hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:42.617794 kubelet[2735]: E0117 00:25:42.617335 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.617 [WARNING][5615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mdmw8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b773dda6-1d12-466d-8ab6-e9b4e6b1277a", ResourceVersion:"1343", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22b939452ff7e1a31b72db4fcd852f9bc4db042e05403fa63df2f3d524749e6a", Pod:"goldmane-666569f655-mdmw8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6946db907a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.617 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.617 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" iface="eth0" netns="" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.618 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.618 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.712 [INFO][5624] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.713 [INFO][5624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.713 [INFO][5624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.724 [WARNING][5624] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.725 [INFO][5624] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" HandleID="k8s-pod-network.9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Workload="localhost-k8s-goldmane--666569f655--mdmw8-eth0" Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.728 [INFO][5624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:42.751467 containerd[1603]: 2026-01-17 00:25:42.731 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4" Jan 17 00:25:42.751467 containerd[1603]: time="2026-01-17T00:25:42.746943828Z" level=info msg="TearDown network for sandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" successfully" Jan 17 00:25:42.802870 containerd[1603]: time="2026-01-17T00:25:42.801823948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:42.802870 containerd[1603]: time="2026-01-17T00:25:42.801945625Z" level=info msg="RemovePodSandbox \"9e2a0c748ac0d7e31339cc49c2f6b514bab6aa906b3b14b3e176e577803883c4\" returns successfully" Jan 17 00:25:42.809133 containerd[1603]: time="2026-01-17T00:25:42.806782228Z" level=info msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:42.975 [WARNING][5641] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" WorkloadEndpoint="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:42.984 [INFO][5641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:42.986 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" iface="eth0" netns="" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:42.986 [INFO][5641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:42.986 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.103 [INFO][5650] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.103 [INFO][5650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.104 [INFO][5650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.113 [WARNING][5650] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.113 [INFO][5650] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.126 [INFO][5650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:43.145362 containerd[1603]: 2026-01-17 00:25:43.135 [INFO][5641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.145362 containerd[1603]: time="2026-01-17T00:25:43.144701746Z" level=info msg="TearDown network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" successfully" Jan 17 00:25:43.145362 containerd[1603]: time="2026-01-17T00:25:43.144735950Z" level=info msg="StopPodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" returns successfully" Jan 17 00:25:43.152705 containerd[1603]: time="2026-01-17T00:25:43.152436449Z" level=info msg="RemovePodSandbox for \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" Jan 17 00:25:43.152705 containerd[1603]: time="2026-01-17T00:25:43.152478037Z" level=info msg="Forcibly stopping sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\"" Jan 17 00:25:43.308576 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:36052.service - OpenSSH per-connection server daemon (10.0.0.1:36052). 
Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.297 [WARNING][5667] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" WorkloadEndpoint="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.298 [INFO][5667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.298 [INFO][5667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" iface="eth0" netns="" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.298 [INFO][5667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.298 [INFO][5667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.369 [INFO][5676] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.370 [INFO][5676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.370 [INFO][5676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.383 [WARNING][5676] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.383 [INFO][5676] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" HandleID="k8s-pod-network.a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Workload="localhost-k8s-whisker--854f7549c5--c67l8-eth0" Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.394 [INFO][5676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:43.410148 containerd[1603]: 2026-01-17 00:25:43.404 [INFO][5667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e" Jan 17 00:25:43.410148 containerd[1603]: time="2026-01-17T00:25:43.408663153Z" level=info msg="TearDown network for sandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" successfully" Jan 17 00:25:43.439583 containerd[1603]: time="2026-01-17T00:25:43.439516598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
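[Editor's note] Each of the PullImage failures above has the same shape: containerd resolves the reference, the registry answers the manifest request with HTTP 404 ("trying next host - response was http.StatusNotFound"), and the pull is surfaced to the kubelet as NotFound / ErrImagePull. The Go sketch below is an added, simplified probe of a registry manifest endpoint that only interprets the HTTP status; it deliberately omits the bearer-token negotiation a real client (or ghcr.io specifically) performs, so a 401 here just means authentication would be required.

```go
// Minimal illustrative probe of an OCI registry manifest endpoint.
// Real clients such as containerd also negotiate auth tokens and media types.
package main

import (
	"fmt"
	"net/http"
)

// tagStatus asks the registry whether <repo>:<tag> resolves to a manifest.
func tagStatus(registry, repo, tag string) (string, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return "", err
	}
	// Advertise common manifest media types so the registry answers the HEAD.
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, "+
			"application/vnd.docker.distribution.manifest.list.v2+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusOK:
		return "tag exists", nil
	case http.StatusNotFound:
		// The case the log shows for calico/csi, node-driver-registrar and
		// apiserver at v3.30.4: the reference does not resolve.
		return "tag not found", nil
	case http.StatusUnauthorized:
		return "registry requires an auth token (not handled in this sketch)", nil
	default:
		return fmt.Sprintf("unexpected status %d", resp.StatusCode), nil
	}
}

func main() {
	status, err := tagStatus("ghcr.io", "flatcar/calico/csi", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(status)
}
```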
Jan 17 00:25:43.439862 containerd[1603]: time="2026-01-17T00:25:43.439830334Z" level=info msg="RemovePodSandbox \"a2c8b1f18616337650bb2ffd840382f99908598d1c3531bf27d804a8bce8ad7e\" returns successfully" Jan 17 00:25:43.440680 containerd[1603]: time="2026-01-17T00:25:43.440649285Z" level=info msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" Jan 17 00:25:43.444172 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 36052 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:25:43.455154 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:43.476872 systemd-logind[1580]: New session 9 of user core. Jan 17 00:25:43.488429 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.593 [WARNING][5698] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ee4324-8757-4618-9329-530899bfb3f8", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43", Pod:"calico-apiserver-575b9f78b6-2wpqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bafa2e964a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.594 [INFO][5698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.594 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" iface="eth0" netns="" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.597 [INFO][5698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.597 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.666 [INFO][5710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.666 [INFO][5710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.666 [INFO][5710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.704 [WARNING][5710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.704 [INFO][5710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.719 [INFO][5710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:43.729692 containerd[1603]: 2026-01-17 00:25:43.725 [INFO][5698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.730703 containerd[1603]: time="2026-01-17T00:25:43.729728862Z" level=info msg="TearDown network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" successfully" Jan 17 00:25:43.730703 containerd[1603]: time="2026-01-17T00:25:43.729761333Z" level=info msg="StopPodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" returns successfully" Jan 17 00:25:43.731618 containerd[1603]: time="2026-01-17T00:25:43.731549433Z" level=info msg="RemovePodSandbox for \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" Jan 17 00:25:43.731618 containerd[1603]: time="2026-01-17T00:25:43.731607982Z" level=info msg="Forcibly stopping sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\"" Jan 17 00:25:43.807684 sshd[5674]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:43.819184 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:36052.service: Deactivated successfully. Jan 17 00:25:43.830428 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:25:43.831838 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:25:43.837548 systemd-logind[1580]: Removed session 9. 
Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.877 [WARNING][5734] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0", GenerateName:"calico-apiserver-575b9f78b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"16ee4324-8757-4618-9329-530899bfb3f8", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b9f78b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"221442f2b5f09cf65f2562ba44ced39408ee76a77815f368e8fae17cc4a85a43", Pod:"calico-apiserver-575b9f78b6-2wpqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bafa2e964a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.878 [INFO][5734] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.879 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" iface="eth0" netns="" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.879 [INFO][5734] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.879 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.932 [INFO][5745] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.932 [INFO][5745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.932 [INFO][5745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.946 [WARNING][5745] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.946 [INFO][5745] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" HandleID="k8s-pod-network.6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Workload="localhost-k8s-calico--apiserver--575b9f78b6--2wpqn-eth0" Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.965 [INFO][5745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:43.976964 containerd[1603]: 2026-01-17 00:25:43.973 [INFO][5734] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc" Jan 17 00:25:43.977686 containerd[1603]: time="2026-01-17T00:25:43.977003271Z" level=info msg="TearDown network for sandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" successfully" Jan 17 00:25:43.985601 containerd[1603]: time="2026-01-17T00:25:43.985423337Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:43.985601 containerd[1603]: time="2026-01-17T00:25:43.985507163Z" level=info msg="RemovePodSandbox \"6a17d6fe83522f9a3ea2302a8dbb4bb28c07d2a93ff758bee17e4510afaae1cc\" returns successfully" Jan 17 00:25:43.988149 containerd[1603]: time="2026-01-17T00:25:43.987717256Z" level=info msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.103 [WARNING][5764] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--glqqh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6", Pod:"coredns-668d6bf9bc-glqqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2fa3f8e3e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.103 [INFO][5764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.103 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" iface="eth0" netns="" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.103 [INFO][5764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.103 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.174 [INFO][5773] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.174 [INFO][5773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.174 [INFO][5773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.188 [WARNING][5773] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.188 [INFO][5773] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.192 [INFO][5773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:44.205390 containerd[1603]: 2026-01-17 00:25:44.197 [INFO][5764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.207279 containerd[1603]: time="2026-01-17T00:25:44.206424304Z" level=info msg="TearDown network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" successfully" Jan 17 00:25:44.207279 containerd[1603]: time="2026-01-17T00:25:44.206493934Z" level=info msg="StopPodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" returns successfully" Jan 17 00:25:44.207356 containerd[1603]: time="2026-01-17T00:25:44.207279501Z" level=info msg="RemovePodSandbox for \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" Jan 17 00:25:44.207356 containerd[1603]: time="2026-01-17T00:25:44.207320748Z" level=info msg="Forcibly stopping sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\"" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.306 [WARNING][5789] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--glqqh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"388d00a3-0b80-47d6-9f0e-7b2e6b5cd18c", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a4cc7e18bd4a8656555e5731d0aba09381793dccd9450b2d52ff6c01a8fec6", Pod:"coredns-668d6bf9bc-glqqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2fa3f8e3e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.308 [INFO][5789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.308 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" iface="eth0" netns="" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.308 [INFO][5789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.308 [INFO][5789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.375 [INFO][5798] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.381 [INFO][5798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.381 [INFO][5798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.398 [WARNING][5798] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.398 [INFO][5798] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" HandleID="k8s-pod-network.e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Workload="localhost-k8s-coredns--668d6bf9bc--glqqh-eth0" Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.405 [INFO][5798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:25:44.427669 containerd[1603]: 2026-01-17 00:25:44.416 [INFO][5789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51" Jan 17 00:25:44.427669 containerd[1603]: time="2026-01-17T00:25:44.427530429Z" level=info msg="TearDown network for sandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" successfully" Jan 17 00:25:44.441769 containerd[1603]: time="2026-01-17T00:25:44.441206241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:25:44.441769 containerd[1603]: time="2026-01-17T00:25:44.441547990Z" level=info msg="RemovePodSandbox \"e97313777f1fd3bc9a144e90c045e1366039242cb3fae5d79b3c64f0c2330c51\" returns successfully" Jan 17 00:25:45.444415 containerd[1603]: time="2026-01-17T00:25:45.444205626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:25:45.538280 containerd[1603]: time="2026-01-17T00:25:45.537962858Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:45.540150 containerd[1603]: time="2026-01-17T00:25:45.540104902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:25:45.540690 containerd[1603]: time="2026-01-17T00:25:45.540367663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:45.541095 kubelet[2735]: E0117 00:25:45.541011 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:25:45.543693 kubelet[2735]: E0117 00:25:45.541742 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 
17 00:25:45.543693 kubelet[2735]: E0117 00:25:45.542201 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmvxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:45.544789 kubelet[2735]: E0117 00:25:45.544756 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:25:47.426732 kubelet[2735]: E0117 00:25:47.426689 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:47.432940 kubelet[2735]: E0117 00:25:47.432441 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:25:48.834744 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:36056.service - OpenSSH per-connection server daemon (10.0.0.1:36056). Jan 17 00:25:49.006684 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 36056 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:25:49.009308 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:49.024967 systemd-logind[1580]: New session 10 of user core. Jan 17 00:25:49.051687 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:25:49.442496 containerd[1603]: time="2026-01-17T00:25:49.442288159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:25:49.546368 containerd[1603]: time="2026-01-17T00:25:49.545912033Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:25:49.559623 containerd[1603]: time="2026-01-17T00:25:49.559571465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:25:49.560329 containerd[1603]: time="2026-01-17T00:25:49.559920296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:25:49.560599 kubelet[2735]: E0117 00:25:49.560556 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:49.577718 kubelet[2735]: E0117 00:25:49.561343 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:25:49.577718 kubelet[2735]: E0117 00:25:49.561505 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9wrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:25:49.579502 kubelet[2735]: E0117 00:25:49.579033 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:25:49.583184 sshd[5805]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:49.600783 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:25:49.605944 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:36056.service: Deactivated successfully. Jan 17 00:25:49.625296 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:25:49.638015 systemd-logind[1580]: Removed session 10. 
Jan 17 00:25:50.434171 kubelet[2735]: E0117 00:25:50.433210 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:25:54.616963 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:43312.service - OpenSSH per-connection server daemon (10.0.0.1:43312). Jan 17 00:25:54.694333 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 43312 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:25:54.698857 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:25:54.711023 systemd-logind[1580]: New session 11 of user core. Jan 17 00:25:54.722521 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:25:55.037771 sshd[5852]: pam_unix(sshd:session): session closed for user core Jan 17 00:25:55.059492 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:43312.service: Deactivated successfully. Jan 17 00:25:55.069426 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:25:55.072004 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:25:55.074855 systemd-logind[1580]: Removed session 11. Jan 17 00:25:55.985452 update_engine[1588]: I20260117 00:25:55.985212 1588 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:25:55.985452 update_engine[1588]: I20260117 00:25:55.985344 1588 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.000406 1588 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001456 1588 omaha_request_params.cc:62] Current group set to lts Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001614 1588 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001631 1588 update_attempter.cc:643] Scheduling an action processor start. 
Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001657 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001713 1588 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001796 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001809 1588 omaha_request_action.cc:272] Request: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: Jan 17 00:25:56.002337 update_engine[1588]: I20260117 00:25:56.001821 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:25:56.016820 update_engine[1588]: I20260117 00:25:56.015950 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:25:56.016820 update_engine[1588]: I20260117 00:25:56.016759 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:25:56.042645 update_engine[1588]: E20260117 00:25:56.042531 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:25:56.043000 update_engine[1588]: I20260117 00:25:56.042676 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:25:56.068371 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:25:56.444543 kubelet[2735]: E0117 00:25:56.440336 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:25:57.445330 kubelet[2735]: E0117 00:25:57.444661 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:25:58.471511 kubelet[2735]: E0117 00:25:58.467771 2735 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:26:00.080015 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:43322.service - OpenSSH per-connection server daemon (10.0.0.1:43322). Jan 17 00:26:00.215802 sshd[5869]: Accepted publickey for core from 10.0.0.1 port 43322 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:00.225545 sshd[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:00.247686 systemd-logind[1580]: New session 12 of user core. Jan 17 00:26:00.272687 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:26:00.431913 kubelet[2735]: E0117 00:26:00.431807 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:26:00.449449 kubelet[2735]: E0117 00:26:00.440939 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:26:00.790924 sshd[5869]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:00.806730 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:43322.service: Deactivated successfully. Jan 17 00:26:00.813576 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:26:00.818296 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:26:00.822357 systemd-logind[1580]: Removed session 12. 
Jan 17 00:26:02.432051 kubelet[2735]: E0117 00:26:02.431772 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:26:05.813034 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:39888.service - OpenSSH per-connection server daemon (10.0.0.1:39888). Jan 17 00:26:05.898497 update_engine[1588]: I20260117 00:26:05.898306 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:26:05.899727 update_engine[1588]: I20260117 00:26:05.898667 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:26:05.899727 update_engine[1588]: I20260117 00:26:05.898896 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:26:05.919706 update_engine[1588]: E20260117 00:26:05.919443 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:26:05.919706 update_engine[1588]: I20260117 00:26:05.919551 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:26:06.019180 sshd[5885]: Accepted publickey for core from 10.0.0.1 port 39888 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:06.031715 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:06.046550 systemd-logind[1580]: New session 13 of user core. Jan 17 00:26:06.068673 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:26:06.504574 sshd[5885]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:06.514640 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:39888.service: Deactivated successfully. Jan 17 00:26:06.521934 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:26:06.523583 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:26:06.526074 systemd-logind[1580]: Removed session 13. 
Jan 17 00:26:07.472371 kubelet[2735]: E0117 00:26:07.471763 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:26:08.431285 kubelet[2735]: E0117 00:26:08.426863 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:08.432220 kubelet[2735]: E0117 00:26:08.432181 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:26:09.427195 kubelet[2735]: E0117 00:26:09.426721 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:11.432799 kubelet[2735]: E0117 00:26:11.430928 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:26:11.519572 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:39894.service - OpenSSH per-connection server daemon (10.0.0.1:39894). Jan 17 00:26:11.595687 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 39894 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:11.603798 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:11.630393 systemd-logind[1580]: New session 14 of user core. Jan 17 00:26:11.641099 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 17 00:26:11.904740 sshd[5901]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:11.913090 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:39894.service: Deactivated successfully. Jan 17 00:26:11.922192 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:26:11.922777 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:26:11.925413 systemd-logind[1580]: Removed session 14. Jan 17 00:26:12.435398 kubelet[2735]: E0117 00:26:12.435300 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:26:13.448412 kubelet[2735]: E0117 00:26:13.447752 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:14.428149 kubelet[2735]: E0117 00:26:14.428030 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:26:15.898112 update_engine[1588]: I20260117 00:26:15.897368 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:26:15.898112 update_engine[1588]: I20260117 00:26:15.897807 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:26:15.902154 update_engine[1588]: I20260117 00:26:15.902016 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:26:15.917726 update_engine[1588]: E20260117 00:26:15.917526 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:26:15.917726 update_engine[1588]: I20260117 00:26:15.917641 1588 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:26:16.427796 kubelet[2735]: E0117 00:26:16.427673 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:16.429768 kubelet[2735]: E0117 00:26:16.429401 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:26:16.929517 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:49206.service - OpenSSH per-connection server daemon (10.0.0.1:49206). Jan 17 00:26:17.012344 sshd[5926]: Accepted publickey for core from 10.0.0.1 port 49206 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:17.017798 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:17.033721 systemd-logind[1580]: New session 15 of user core. Jan 17 00:26:17.046898 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:26:17.434082 sshd[5926]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:17.464876 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:49210.service - OpenSSH per-connection server daemon (10.0.0.1:49210). Jan 17 00:26:17.466011 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:49206.service: Deactivated successfully. Jan 17 00:26:17.471805 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:26:17.477049 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:26:17.478989 systemd-logind[1580]: Removed session 15. Jan 17 00:26:17.550787 sshd[5940]: Accepted publickey for core from 10.0.0.1 port 49210 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:17.576938 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:17.592859 systemd-logind[1580]: New session 16 of user core. Jan 17 00:26:17.609539 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:26:17.995995 sshd[5940]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:18.013161 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:49224.service - OpenSSH per-connection server daemon (10.0.0.1:49224). Jan 17 00:26:18.014025 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:49210.service: Deactivated successfully. 
Jan 17 00:26:18.026024 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:26:18.026903 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:26:18.068904 systemd-logind[1580]: Removed session 16. Jan 17 00:26:18.206412 sshd[5952]: Accepted publickey for core from 10.0.0.1 port 49224 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:18.212407 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:18.256842 systemd-logind[1580]: New session 17 of user core. Jan 17 00:26:18.272738 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:26:18.530968 sshd[5952]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:18.540840 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:49224.service: Deactivated successfully. Jan 17 00:26:18.547486 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:26:18.549924 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:26:18.566093 systemd-logind[1580]: Removed session 17. Jan 17 00:26:20.443790 kubelet[2735]: E0117 00:26:20.443549 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:26:20.443790 kubelet[2735]: E0117 00:26:20.443669 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:26:21.264518 systemd[1]: run-containerd-runc-k8s.io-3d8833480a9c650a18f2f33aec7782794ba71d29e942c88445211735182545fd-runc.928RKS.mount: Deactivated successfully. 
Jan 17 00:26:23.431053 containerd[1603]: time="2026-01-17T00:26:23.429336391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:26:23.517343 containerd[1603]: time="2026-01-17T00:26:23.514008732Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:23.523622 containerd[1603]: time="2026-01-17T00:26:23.520555579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:26:23.523622 containerd[1603]: time="2026-01-17T00:26:23.520695450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:26:23.523923 kubelet[2735]: E0117 00:26:23.520879 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:26:23.523923 kubelet[2735]: E0117 00:26:23.520959 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:26:23.523923 kubelet[2735]: E0117 00:26:23.521130 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:23.531310 kubelet[2735]: E0117 00:26:23.525521 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:26:23.581436 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:49364.service - OpenSSH per-connection server daemon (10.0.0.1:49364). Jan 17 00:26:23.745667 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 49364 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:23.747732 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:23.766592 systemd-logind[1580]: New session 18 of user core. Jan 17 00:26:23.784877 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:26:24.014612 sshd[5994]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:24.024102 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:49364.service: Deactivated successfully. Jan 17 00:26:24.037353 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:26:24.045976 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:26:24.050808 systemd-logind[1580]: Removed session 18. 
Jan 17 00:26:25.430067 kubelet[2735]: E0117 00:26:25.427105 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:26:25.897704 update_engine[1588]: I20260117 00:26:25.896030 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:26:25.897704 update_engine[1588]: I20260117 00:26:25.896948 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:26:25.897704 update_engine[1588]: I20260117 00:26:25.897310 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:26:25.920065 update_engine[1588]: E20260117 00:26:25.919932 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:26:25.920401 update_engine[1588]: I20260117 00:26:25.920070 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:26:25.920401 update_engine[1588]: I20260117 00:26:25.920095 1588 omaha_request_action.cc:617] Omaha request response: Jan 17 00:26:25.921843 update_engine[1588]: E20260117 00:26:25.921756 1588 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:26:25.921843 update_engine[1588]: I20260117 00:26:25.921833 1588 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:26:25.921934 update_engine[1588]: I20260117 00:26:25.921849 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:26:25.921934 update_engine[1588]: I20260117 00:26:25.921862 1588 update_attempter.cc:306] Processing Done. Jan 17 00:26:25.921934 update_engine[1588]: E20260117 00:26:25.921883 1588 update_attempter.cc:619] Update failed. Jan 17 00:26:25.921934 update_engine[1588]: I20260117 00:26:25.921893 1588 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:26:25.921934 update_engine[1588]: I20260117 00:26:25.921902 1588 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:26:25.921934 update_engine[1588]: I20260117 00:26:25.921913 1588 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 17 00:26:25.922200 update_engine[1588]: I20260117 00:26:25.922036 1588 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:26:25.922200 update_engine[1588]: I20260117 00:26:25.922071 1588 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:26:25.922200 update_engine[1588]: I20260117 00:26:25.922082 1588 omaha_request_action.cc:272] Request: Jan 17 00:26:25.922200 update_engine[1588]: Jan 17 00:26:25.922200 update_engine[1588]: Jan 17 00:26:25.922200 update_engine[1588]: Jan 17 00:26:25.922200 update_engine[1588]: Jan 17 00:26:25.922200 update_engine[1588]: Jan 17 00:26:25.922200 update_engine[1588]: Jan 17 00:26:25.922200 update_engine[1588]: I20260117 00:26:25.922092 1588 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:26:25.923803 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:26:25.924546 update_engine[1588]: I20260117 00:26:25.923558 1588 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:26:25.925661 update_engine[1588]: I20260117 00:26:25.925591 1588 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:26:25.939977 update_engine[1588]: E20260117 00:26:25.939836 1588 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:26:25.939977 update_engine[1588]: I20260117 00:26:25.939960 1588 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:26:25.939977 update_engine[1588]: I20260117 00:26:25.939976 1588 omaha_request_action.cc:617] Omaha request response: Jan 17 00:26:25.940137 update_engine[1588]: I20260117 00:26:25.939989 1588 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:26:25.940137 update_engine[1588]: I20260117 00:26:25.940001 1588 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:26:25.940137 update_engine[1588]: I20260117 00:26:25.940011 1588 update_attempter.cc:306] Processing Done. Jan 17 00:26:25.940137 update_engine[1588]: I20260117 00:26:25.940083 1588 update_attempter.cc:310] Error event sent. 
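The update_engine failures above are expected when automatic updates are switched off: the Omaha request is being posted to the literal host `disabled`, so curl reports "Could not resolve host: disabled". A hedged sketch that reads the configured update server and tries to resolve it follows; the `/etc/flatcar/update.conf` path and the `SERVER=` key are assumed Flatcar conventions, not values printed in this log.

```go
// omaha_server_check.go - sketch: report the configured Omaha server and
// whether its hostname resolves (SERVER=disabled will not).
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"strings"
)

func main() {
	server := "disabled" // default mirrors what this log shows
	if f, err := os.Open("/etc/flatcar/update.conf"); err == nil {
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if v, ok := strings.CutPrefix(line, "SERVER="); ok {
				server = v
			}
		}
	}
	fmt.Println("configured update server:", server)

	host := strings.TrimPrefix(strings.TrimPrefix(server, "https://"), "http://")
	host = strings.SplitN(host, "/", 2)[0]
	if _, err := net.LookupHost(host); err != nil {
		// With SERVER=disabled this fails exactly like the
		// "Could not resolve host: disabled" lines above.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("update server resolves")
}
```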
Jan 17 00:26:25.940137 update_engine[1588]: I20260117 00:26:25.940100 1588 update_check_scheduler.cc:74] Next update check in 42m2s Jan 17 00:26:25.944780 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:26:26.433200 containerd[1603]: time="2026-01-17T00:26:26.432920327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:26:26.516341 containerd[1603]: time="2026-01-17T00:26:26.516289281Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:26.519412 containerd[1603]: time="2026-01-17T00:26:26.519201957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:26:26.519412 containerd[1603]: time="2026-01-17T00:26:26.519282927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:26:26.523073 kubelet[2735]: E0117 00:26:26.519558 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:26:26.523073 kubelet[2735]: E0117 00:26:26.519619 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:26:26.523073 kubelet[2735]: E0117 00:26:26.519774 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmvxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:26.523073 kubelet[2735]: E0117 00:26:26.521118 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:26:29.066669 systemd[1]: Started 
sshd@18-10.0.0.48:22-10.0.0.1:49376.service - OpenSSH per-connection server daemon (10.0.0.1:49376). Jan 17 00:26:29.164879 sshd[6023]: Accepted publickey for core from 10.0.0.1 port 49376 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:29.167602 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:29.178961 systemd-logind[1580]: New session 19 of user core. Jan 17 00:26:29.189372 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:26:29.435463 containerd[1603]: time="2026-01-17T00:26:29.435130464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:26:29.483898 sshd[6023]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:29.489437 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:49376.service: Deactivated successfully. Jan 17 00:26:29.500758 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:26:29.503852 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:26:29.507344 systemd-logind[1580]: Removed session 19. Jan 17 00:26:29.516937 containerd[1603]: time="2026-01-17T00:26:29.516796616Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:29.520614 containerd[1603]: time="2026-01-17T00:26:29.520353279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:26:29.520614 containerd[1603]: time="2026-01-17T00:26:29.520463273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:26:29.521506 kubelet[2735]: E0117 00:26:29.521392 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:26:29.522836 kubelet[2735]: E0117 00:26:29.522511 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:26:29.524306 kubelet[2735]: E0117 00:26:29.524069 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f46c655c0c40418aada782a2b06c3fb5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:29.526727 containerd[1603]: time="2026-01-17T00:26:29.526497785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:26:29.637549 containerd[1603]: time="2026-01-17T00:26:29.635497730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:29.641699 containerd[1603]: time="2026-01-17T00:26:29.641474375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:26:29.641699 containerd[1603]: time="2026-01-17T00:26:29.641565142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:26:29.642289 kubelet[2735]: E0117 00:26:29.642076 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:26:29.642289 kubelet[2735]: E0117 00:26:29.642201 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:26:29.642490 kubelet[2735]: E0117 00:26:29.642432 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:29.645379 kubelet[2735]: E0117 00:26:29.645199 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:26:34.429289 kubelet[2735]: E0117 00:26:34.429050 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 
00:26:34.434799 containerd[1603]: time="2026-01-17T00:26:34.434716385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:26:34.505260 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:39404.service - OpenSSH per-connection server daemon (10.0.0.1:39404). Jan 17 00:26:34.523706 containerd[1603]: time="2026-01-17T00:26:34.523448987Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:34.527330 containerd[1603]: time="2026-01-17T00:26:34.525552546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:26:34.527330 containerd[1603]: time="2026-01-17T00:26:34.525669635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:26:34.527470 kubelet[2735]: E0117 00:26:34.525825 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:26:34.527470 kubelet[2735]: E0117 00:26:34.525883 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:26:34.527470 kubelet[2735]: E0117 00:26:34.526013 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:34.530555 containerd[1603]: time="2026-01-17T00:26:34.530112764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:26:34.565459 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 39404 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:34.570579 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:34.586917 systemd-logind[1580]: New session 20 of user core. Jan 17 00:26:34.595900 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 17 00:26:34.611005 containerd[1603]: time="2026-01-17T00:26:34.610905096Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:34.615435 containerd[1603]: time="2026-01-17T00:26:34.614883577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:26:34.615435 containerd[1603]: time="2026-01-17T00:26:34.615360357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:26:34.616990 kubelet[2735]: E0117 00:26:34.615857 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:26:34.616990 kubelet[2735]: E0117 00:26:34.615969 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:26:34.616990 kubelet[2735]: E0117 00:26:34.616116 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:34.617865 kubelet[2735]: E0117 00:26:34.617751 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:26:34.840740 sshd[6045]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:34.851819 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:39404.service: Deactivated successfully. Jan 17 00:26:34.879908 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:26:34.889438 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:26:34.896610 systemd-logind[1580]: Removed session 20. 
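Until its image pulls, the csi-node-driver-registrar never registers the Tigera CSI driver with kubelet over the unix socket named in the container spec above (`DRIVER_REG_SOCK_PATH=/var/lib/kubelet/plugins/csi.tigera.io/csi.sock`). A small sketch for checking whether that socket exists and accepts connections follows; it only reuses the path already shown in the spec.

```go
// csi_socket_check.go - sketch: verify the CSI driver socket from the
// container spec above exists and accepts a unix connection.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"

	if _, err := os.Stat(sock); err != nil {
		// Expected while the registrar image cannot be pulled.
		fmt.Println("socket missing:", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket present but not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("CSI socket is up")
}
```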
Jan 17 00:26:35.431951 kubelet[2735]: E0117 00:26:35.431756 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:26:35.436809 containerd[1603]: time="2026-01-17T00:26:35.432662387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:26:35.512204 containerd[1603]: time="2026-01-17T00:26:35.511986978Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:35.514143 containerd[1603]: time="2026-01-17T00:26:35.514106447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:26:35.514473 containerd[1603]: time="2026-01-17T00:26:35.514356794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:26:35.520402 kubelet[2735]: E0117 00:26:35.519460 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:26:35.520402 kubelet[2735]: E0117 00:26:35.519837 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:26:35.521590 kubelet[2735]: E0117 00:26:35.521445 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r9hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:35.523432 kubelet[2735]: E0117 00:26:35.523403 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:26:36.428038 containerd[1603]: time="2026-01-17T00:26:36.427995370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:26:36.525754 containerd[1603]: time="2026-01-17T00:26:36.525583701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:26:36.529308 containerd[1603]: time="2026-01-17T00:26:36.529114686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:26:36.529308 containerd[1603]: 
time="2026-01-17T00:26:36.529309971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:26:36.531783 kubelet[2735]: E0117 00:26:36.529653 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:26:36.531783 kubelet[2735]: E0117 00:26:36.529748 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:26:36.531783 kubelet[2735]: E0117 00:26:36.529919 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9wrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:26:36.531783 kubelet[2735]: E0117 00:26:36.531429 2735 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:26:38.427153 kubelet[2735]: E0117 00:26:38.425754 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:39.862633 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:39406.service - OpenSSH per-connection server daemon (10.0.0.1:39406). Jan 17 00:26:40.022296 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:40.025713 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:40.068331 systemd-logind[1580]: New session 21 of user core. Jan 17 00:26:40.092128 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:26:40.419114 sshd[6063]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:40.429557 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:39406.service: Deactivated successfully. Jan 17 00:26:40.434061 kubelet[2735]: E0117 00:26:40.433763 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:40.439131 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:26:40.444750 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:26:40.461725 systemd-logind[1580]: Removed session 21. 
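The recurring "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than kubelet will pass to pods, so the list is truncated to `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch that counts nameserver entries the same way follows; the limit of 3 matches the usual glibc/kubelet convention and is an assumption here, not a value read from this log.

```go
// resolvconf_check.go - sketch: warn when /etc/resolv.conf lists more
// nameservers than the resolver limit (assumed 3).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed limit matching the warning above

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println("cannot read resolv.conf:", err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only %v will be applied\n",
			len(servers), servers[:maxNameservers])
		return
	}
	fmt.Println("nameserver count OK:", servers)
}
```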
Jan 17 00:26:41.428607 kubelet[2735]: E0117 00:26:41.428164 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:26:44.496309 kubelet[2735]: E0117 00:26:44.495536 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:26:45.446167 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:43916.service - OpenSSH per-connection server daemon (10.0.0.1:43916). Jan 17 00:26:45.578737 sshd[6081]: Accepted publickey for core from 10.0.0.1 port 43916 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:45.586358 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:45.606445 systemd-logind[1580]: New session 22 of user core. Jan 17 00:26:45.615141 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:26:45.934188 sshd[6081]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:45.944541 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:43916.service: Deactivated successfully. Jan 17 00:26:45.966102 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:26:45.966173 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:26:45.974677 systemd-logind[1580]: Removed session 22. 
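After the initial ErrImagePull failures, the pods above report ImagePullBackOff: kubelet stops retrying on every sync and instead waits on an exponential back-off between pull attempts. A hedged sketch of that schedule follows; the 10-second initial delay, doubling factor, and 5-minute cap reflect commonly documented kubelet defaults and are assumptions, not values printed in this log.

```go
// pull_backoff.go - sketch of an exponential image-pull back-off schedule
// (assumed defaults: 10s initial, x2 per failure, capped at 5m).
package main

import (
	"fmt"
	"time"
)

func backoff(failures int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("failure %d -> wait %s before next pull\n", n+1, backoff(n))
	}
	// The delay climbs 10s, 20s, 40s, ... then holds at 5m, which is why the
	// ImagePullBackOff messages above recur at widening intervals.
}
```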
Jan 17 00:26:46.441071 kubelet[2735]: E0117 00:26:46.440410 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:26:48.431619 kubelet[2735]: E0117 00:26:48.431547 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:26:48.434803 kubelet[2735]: E0117 00:26:48.432660 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:26:49.441032 kubelet[2735]: E0117 00:26:49.440577 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:26:50.975777 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:43924.service - OpenSSH per-connection server daemon (10.0.0.1:43924). Jan 17 00:26:51.058194 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 43924 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:51.065398 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:51.085163 systemd-logind[1580]: New session 23 of user core. 
Jan 17 00:26:51.099859 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:26:51.334989 systemd[1]: run-containerd-runc-k8s.io-3d8833480a9c650a18f2f33aec7782794ba71d29e942c88445211735182545fd-runc.ZyySTS.mount: Deactivated successfully. Jan 17 00:26:51.533426 sshd[6096]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:51.546026 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:43924.service: Deactivated successfully. Jan 17 00:26:51.546647 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:26:51.560917 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:26:51.568098 systemd-logind[1580]: Removed session 23. Jan 17 00:26:55.438393 kubelet[2735]: E0117 00:26:55.437668 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:26:56.557896 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:37618.service - OpenSSH per-connection server daemon (10.0.0.1:37618). Jan 17 00:26:56.773964 sshd[6138]: Accepted publickey for core from 10.0.0.1 port 37618 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:26:56.779567 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:56.793748 systemd-logind[1580]: New session 24 of user core. Jan 17 00:26:56.802359 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:26:57.095877 sshd[6138]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:57.112393 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:37618.service: Deactivated successfully. Jan 17 00:26:57.117868 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:26:57.119354 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:26:57.121562 systemd-logind[1580]: Removed session 24. 
Jan 17 00:26:57.439628 kubelet[2735]: E0117 00:26:57.439572 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:26:58.437944 kubelet[2735]: E0117 00:26:58.436444 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:27:00.436124 kubelet[2735]: E0117 00:27:00.435654 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:27:00.438200 kubelet[2735]: E0117 00:27:00.437669 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:27:00.439699 kubelet[2735]: E0117 00:27:00.439457 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:27:02.115613 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624). Jan 17 00:27:02.332595 sshd[6155]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:02.338326 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:02.372761 systemd-logind[1580]: New session 25 of user core. Jan 17 00:27:02.384103 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:27:02.674659 sshd[6155]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:02.686484 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:37624.service: Deactivated successfully. Jan 17 00:27:02.690194 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:27:02.691789 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:27:02.694401 systemd-logind[1580]: Removed session 25. Jan 17 00:27:04.429074 kubelet[2735]: E0117 00:27:04.426102 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:07.694620 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:33318.service - OpenSSH per-connection server daemon (10.0.0.1:33318). Jan 17 00:27:07.812790 sshd[6171]: Accepted publickey for core from 10.0.0.1 port 33318 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:07.816607 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:07.856023 systemd-logind[1580]: New session 26 of user core. Jan 17 00:27:07.899713 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:27:08.190362 sshd[6171]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:08.204167 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:33318.service: Deactivated successfully. Jan 17 00:27:08.214131 systemd-logind[1580]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:27:08.215498 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:27:08.224038 systemd-logind[1580]: Removed session 26. 
Jan 17 00:27:08.428093 kubelet[2735]: E0117 00:27:08.427686 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:27:09.446512 kubelet[2735]: E0117 00:27:09.442576 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:27:10.428336 kubelet[2735]: E0117 00:27:10.428193 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:27:11.435872 kubelet[2735]: E0117 00:27:11.435824 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:27:11.447729 kubelet[2735]: E0117 00:27:11.438871 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:27:13.211827 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:40730.service - OpenSSH per-connection server daemon (10.0.0.1:40730). Jan 17 00:27:13.280414 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 40730 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:13.284620 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:13.312113 systemd-logind[1580]: New session 27 of user core. Jan 17 00:27:13.322581 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:27:13.429007 kubelet[2735]: E0117 00:27:13.427709 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:27:13.603355 sshd[6188]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:13.637857 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:40730.service: Deactivated successfully. Jan 17 00:27:13.649728 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:27:13.672153 systemd-logind[1580]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:27:13.680710 systemd-logind[1580]: Removed session 27. Jan 17 00:27:15.428130 kubelet[2735]: E0117 00:27:15.428047 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:18.430109 kubelet[2735]: E0117 00:27:18.430017 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:18.622538 systemd[1]: Started sshd@27-10.0.0.48:22-10.0.0.1:40742.service - OpenSSH per-connection server daemon (10.0.0.1:40742). Jan 17 00:27:18.681883 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 40742 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:18.688043 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:18.698131 systemd-logind[1580]: New session 28 of user core. Jan 17 00:27:18.704729 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:27:18.889421 sshd[6206]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:18.905884 systemd[1]: Started sshd@28-10.0.0.48:22-10.0.0.1:40758.service - OpenSSH per-connection server daemon (10.0.0.1:40758). Jan 17 00:27:18.906651 systemd[1]: sshd@27-10.0.0.48:22-10.0.0.1:40742.service: Deactivated successfully. 
Jan 17 00:27:18.911904 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:27:18.916975 systemd-logind[1580]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:27:18.919861 systemd-logind[1580]: Removed session 28. Jan 17 00:27:19.000156 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 40758 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:19.009429 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:19.023946 systemd-logind[1580]: New session 29 of user core. Jan 17 00:27:19.029673 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:27:19.673758 sshd[6219]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:19.692179 systemd[1]: Started sshd@29-10.0.0.48:22-10.0.0.1:40768.service - OpenSSH per-connection server daemon (10.0.0.1:40768). Jan 17 00:27:19.694037 systemd[1]: sshd@28-10.0.0.48:22-10.0.0.1:40758.service: Deactivated successfully. Jan 17 00:27:19.700637 systemd-logind[1580]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:27:19.705732 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:27:19.711577 systemd-logind[1580]: Removed session 29. Jan 17 00:27:19.800329 sshd[6233]: Accepted publickey for core from 10.0.0.1 port 40768 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:19.804640 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:19.820141 systemd-logind[1580]: New session 30 of user core. Jan 17 00:27:19.829953 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:27:20.835349 sshd[6233]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:20.854777 systemd[1]: Started sshd@30-10.0.0.48:22-10.0.0.1:40772.service - OpenSSH per-connection server daemon (10.0.0.1:40772). Jan 17 00:27:20.855999 systemd[1]: sshd@29-10.0.0.48:22-10.0.0.1:40768.service: Deactivated successfully. Jan 17 00:27:20.866204 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:27:20.869807 systemd-logind[1580]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:27:20.873877 systemd-logind[1580]: Removed session 30. Jan 17 00:27:20.940618 sshd[6257]: Accepted publickey for core from 10.0.0.1 port 40772 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:20.947611 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:20.972668 systemd-logind[1580]: New session 31 of user core. Jan 17 00:27:20.984745 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 17 00:27:21.429854 kubelet[2735]: E0117 00:27:21.429770 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:27:21.433910 kubelet[2735]: E0117 00:27:21.433836 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:27:21.455038 sshd[6257]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:21.468189 systemd[1]: Started sshd@31-10.0.0.48:22-10.0.0.1:40784.service - OpenSSH per-connection server daemon (10.0.0.1:40784). Jan 17 00:27:21.469674 systemd[1]: sshd@30-10.0.0.48:22-10.0.0.1:40772.service: Deactivated successfully. Jan 17 00:27:21.476182 systemd-logind[1580]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:27:21.483453 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:27:21.485404 systemd-logind[1580]: Removed session 31. Jan 17 00:27:21.592148 sshd[6295]: Accepted publickey for core from 10.0.0.1 port 40784 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:21.597967 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:21.609898 systemd-logind[1580]: New session 32 of user core. Jan 17 00:27:21.627860 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:27:21.826590 sshd[6295]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:21.835047 systemd[1]: sshd@31-10.0.0.48:22-10.0.0.1:40784.service: Deactivated successfully. Jan 17 00:27:21.841596 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:27:21.846012 systemd-logind[1580]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:27:21.848160 systemd-logind[1580]: Removed session 32. 
Jan 17 00:27:22.427396 kubelet[2735]: E0117 00:27:22.427335 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:27:22.428649 kubelet[2735]: E0117 00:27:22.428161 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:27:22.433602 kubelet[2735]: E0117 00:27:22.432142 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:27:23.428834 kubelet[2735]: E0117 00:27:23.427657 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:25.446811 kubelet[2735]: E0117 00:27:25.446401 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:26.880702 systemd[1]: Started sshd@32-10.0.0.48:22-10.0.0.1:53776.service - OpenSSH per-connection server daemon (10.0.0.1:53776). Jan 17 00:27:26.974144 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 53776 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:26.981909 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:26.996022 systemd-logind[1580]: New session 33 of user core. Jan 17 00:27:27.001684 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 17 00:27:27.278649 sshd[6313]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:27.285763 systemd[1]: sshd@32-10.0.0.48:22-10.0.0.1:53776.service: Deactivated successfully. Jan 17 00:27:27.290562 systemd-logind[1580]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:27:27.293561 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:27:27.296220 systemd-logind[1580]: Removed session 33. Jan 17 00:27:28.432194 kubelet[2735]: E0117 00:27:28.431724 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:27:32.406601 systemd[1]: Started sshd@33-10.0.0.48:22-10.0.0.1:53780.service - OpenSSH per-connection server daemon (10.0.0.1:53780). Jan 17 00:27:32.517292 sshd[6328]: Accepted publickey for core from 10.0.0.1 port 53780 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:32.518566 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:32.535198 systemd-logind[1580]: New session 34 of user core. Jan 17 00:27:32.546177 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 17 00:27:32.838045 sshd[6328]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:32.866888 systemd[1]: sshd@33-10.0.0.48:22-10.0.0.1:53780.service: Deactivated successfully. Jan 17 00:27:32.881430 systemd[1]: session-34.scope: Deactivated successfully. Jan 17 00:27:32.887044 systemd-logind[1580]: Session 34 logged out. Waiting for processes to exit. Jan 17 00:27:32.893975 systemd-logind[1580]: Removed session 34. 
Jan 17 00:27:33.434788 kubelet[2735]: E0117 00:27:33.432864 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:27:33.444726 kubelet[2735]: E0117 00:27:33.442815 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:27:33.445711 kubelet[2735]: E0117 00:27:33.444875 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:27:34.430839 kubelet[2735]: E0117 00:27:34.428595 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:27:36.439730 kubelet[2735]: E0117 00:27:36.437564 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:27:37.846890 systemd[1]: Started sshd@34-10.0.0.48:22-10.0.0.1:46172.service - OpenSSH per-connection server daemon (10.0.0.1:46172). Jan 17 00:27:37.979399 sshd[6350]: Accepted publickey for core from 10.0.0.1 port 46172 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:37.989897 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:38.006422 systemd-logind[1580]: New session 35 of user core. Jan 17 00:27:38.018744 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 17 00:27:38.314326 sshd[6350]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:38.323892 systemd[1]: sshd@34-10.0.0.48:22-10.0.0.1:46172.service: Deactivated successfully. Jan 17 00:27:38.332916 systemd[1]: session-35.scope: Deactivated successfully. Jan 17 00:27:38.334108 systemd-logind[1580]: Session 35 logged out. Waiting for processes to exit. Jan 17 00:27:38.341325 systemd-logind[1580]: Removed session 35. Jan 17 00:27:39.435589 kubelet[2735]: E0117 00:27:39.433766 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:27:43.364730 systemd[1]: Started sshd@35-10.0.0.48:22-10.0.0.1:42250.service - OpenSSH per-connection server daemon (10.0.0.1:42250). Jan 17 00:27:43.522884 sshd[6367]: Accepted publickey for core from 10.0.0.1 port 42250 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:43.532914 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:43.583487 systemd-logind[1580]: New session 36 of user core. Jan 17 00:27:43.617921 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 17 00:27:44.062765 sshd[6367]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:44.080419 systemd[1]: sshd@35-10.0.0.48:22-10.0.0.1:42250.service: Deactivated successfully. Jan 17 00:27:44.084896 systemd[1]: session-36.scope: Deactivated successfully. Jan 17 00:27:44.091773 systemd-logind[1580]: Session 36 logged out. Waiting for processes to exit. Jan 17 00:27:44.093406 systemd-logind[1580]: Removed session 36. 
Jan 17 00:27:45.434424 kubelet[2735]: E0117 00:27:45.429207 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:27:45.440516 kubelet[2735]: E0117 00:27:45.438858 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:27:45.447674 kubelet[2735]: E0117 00:27:45.439347 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:27:48.438642 kubelet[2735]: E0117 00:27:48.437676 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:27:48.446695 containerd[1603]: time="2026-01-17T00:27:48.444141024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:27:48.579333 
containerd[1603]: time="2026-01-17T00:27:48.577301218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:48.597840 containerd[1603]: time="2026-01-17T00:27:48.597660717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:27:48.597840 containerd[1603]: time="2026-01-17T00:27:48.597832749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:48.602546 kubelet[2735]: E0117 00:27:48.599894 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:48.602546 kubelet[2735]: E0117 00:27:48.599990 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:27:48.602546 kubelet[2735]: E0117 00:27:48.600285 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c64f7b875-k79d8_calico-system(7904b8c1-aed5-4856-a748-a81b4e03c215): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:48.602944 kubelet[2735]: E0117 00:27:48.602611 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:27:49.090561 systemd[1]: Started sshd@36-10.0.0.48:22-10.0.0.1:42264.service - OpenSSH per-connection server daemon (10.0.0.1:42264). Jan 17 00:27:49.220184 sshd[6389]: Accepted publickey for core from 10.0.0.1 port 42264 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:49.229168 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:49.265055 systemd-logind[1580]: New session 37 of user core. Jan 17 00:27:49.284871 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 17 00:27:49.437175 kubelet[2735]: E0117 00:27:49.432929 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:49.690015 sshd[6389]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:49.703673 systemd[1]: sshd@36-10.0.0.48:22-10.0.0.1:42264.service: Deactivated successfully. Jan 17 00:27:49.720896 systemd-logind[1580]: Session 37 logged out. Waiting for processes to exit. Jan 17 00:27:49.722030 systemd[1]: session-37.scope: Deactivated successfully. Jan 17 00:27:49.730118 systemd-logind[1580]: Removed session 37. Jan 17 00:27:51.253611 systemd[1]: run-containerd-runc-k8s.io-3d8833480a9c650a18f2f33aec7782794ba71d29e942c88445211735182545fd-runc.g7RDBN.mount: Deactivated successfully. 
Jan 17 00:27:51.427116 kubelet[2735]: E0117 00:27:51.426200 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:54.449346 kubelet[2735]: E0117 00:27:54.446096 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:27:54.718457 systemd[1]: Started sshd@37-10.0.0.48:22-10.0.0.1:43424.service - OpenSSH per-connection server daemon (10.0.0.1:43424). Jan 17 00:27:54.823738 sshd[6428]: Accepted publickey for core from 10.0.0.1 port 43424 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:27:54.826825 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:54.835876 systemd-logind[1580]: New session 38 of user core. Jan 17 00:27:54.847431 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 17 00:27:55.112849 sshd[6428]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:55.131204 systemd[1]: sshd@37-10.0.0.48:22-10.0.0.1:43424.service: Deactivated successfully. Jan 17 00:27:55.136116 systemd-logind[1580]: Session 38 logged out. Waiting for processes to exit. Jan 17 00:27:55.136620 systemd[1]: session-38.scope: Deactivated successfully. Jan 17 00:27:55.139567 systemd-logind[1580]: Removed session 38. 
Jan 17 00:27:56.434302 containerd[1603]: time="2026-01-17T00:27:56.432810265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:27:56.510558 containerd[1603]: time="2026-01-17T00:27:56.509604993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:56.523305 containerd[1603]: time="2026-01-17T00:27:56.521006797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:27:56.523305 containerd[1603]: time="2026-01-17T00:27:56.521074431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:27:56.523305 containerd[1603]: time="2026-01-17T00:27:56.522365635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:27:56.523617 kubelet[2735]: E0117 00:27:56.521379 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:56.523617 kubelet[2735]: E0117 00:27:56.521485 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:27:56.523617 kubelet[2735]: E0117 00:27:56.521748 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f46c655c0c40418aada782a2b06c3fb5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:56.621906 containerd[1603]: time="2026-01-17T00:27:56.620360471Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:56.634017 containerd[1603]: time="2026-01-17T00:27:56.632165470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:27:56.634181 containerd[1603]: time="2026-01-17T00:27:56.632474675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:27:56.634375 kubelet[2735]: E0117 00:27:56.634313 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:56.635437 kubelet[2735]: E0117 00:27:56.634536 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:27:56.635437 kubelet[2735]: E0117 00:27:56.634828 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:56.635700 containerd[1603]: time="2026-01-17T00:27:56.635516270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:27:56.744433 containerd[1603]: time="2026-01-17T00:27:56.743964360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:56.764654 containerd[1603]: time="2026-01-17T00:27:56.764529207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:27:56.764654 containerd[1603]: time="2026-01-17T00:27:56.764592145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:27:56.764890 kubelet[2735]: E0117 00:27:56.764836 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:56.764949 kubelet[2735]: E0117 00:27:56.764893 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:27:56.766430 kubelet[2735]: E0117 00:27:56.765146 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmvxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mdmw8_calico-system(b773dda6-1d12-466d-8ab6-e9b4e6b1277a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:56.770042 containerd[1603]: time="2026-01-17T00:27:56.769631015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:27:56.771363 kubelet[2735]: E0117 00:27:56.771184 2735 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a" Jan 17 00:27:56.850885 containerd[1603]: time="2026-01-17T00:27:56.850693935Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:56.870507 containerd[1603]: time="2026-01-17T00:27:56.868891893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:27:56.870507 containerd[1603]: time="2026-01-17T00:27:56.869028697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:27:56.870761 kubelet[2735]: E0117 00:27:56.869637 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:56.870761 kubelet[2735]: E0117 00:27:56.869710 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:27:56.870761 kubelet[2735]: E0117 00:27:56.869962 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shpgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5689644567-q7l8h_calico-system(089f642f-ff29-4db0-ba9f-a6e7ff0183de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:56.872913 containerd[1603]: time="2026-01-17T00:27:56.872803252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:27:56.879684 kubelet[2735]: E0117 00:27:56.879533 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5689644567-q7l8h" podUID="089f642f-ff29-4db0-ba9f-a6e7ff0183de" Jan 17 00:27:56.996749 containerd[1603]: time="2026-01-17T00:27:56.995664573Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:27:57.000353 containerd[1603]: time="2026-01-17T00:27:57.000295530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:27:57.000619 containerd[1603]: time="2026-01-17T00:27:57.000312454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:27:57.000764 kubelet[2735]: E0117 00:27:57.000707 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:57.000900 kubelet[2735]: E0117 00:27:57.000782 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:27:57.001019 kubelet[2735]: E0117 00:27:57.000929 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tf4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pzcck_calico-system(bdf7dcb1-7f01-49ed-b25d-dd851c91e195): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:27:57.002651 kubelet[2735]: E0117 00:27:57.002542 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pzcck" podUID="bdf7dcb1-7f01-49ed-b25d-dd851c91e195" Jan 17 00:28:00.132804 systemd[1]: Started sshd@38-10.0.0.48:22-10.0.0.1:43436.service - OpenSSH per-connection server daemon (10.0.0.1:43436). Jan 17 00:28:00.254735 sshd[6457]: Accepted publickey for core from 10.0.0.1 port 43436 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:00.259154 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:00.288016 systemd-logind[1580]: New session 39 of user core. Jan 17 00:28:00.309933 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 17 00:28:00.448256 kubelet[2735]: E0117 00:28:00.447554 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c64f7b875-k79d8" podUID="7904b8c1-aed5-4856-a748-a81b4e03c215" Jan 17 00:28:00.720353 sshd[6457]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:00.729809 systemd[1]: sshd@38-10.0.0.48:22-10.0.0.1:43436.service: Deactivated successfully. Jan 17 00:28:00.737037 systemd[1]: session-39.scope: Deactivated successfully. Jan 17 00:28:00.744570 systemd-logind[1580]: Session 39 logged out. Waiting for processes to exit. Jan 17 00:28:00.747586 systemd-logind[1580]: Removed session 39. 
Jan 17 00:28:01.436000 containerd[1603]: time="2026-01-17T00:28:01.435615359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:01.568838 containerd[1603]: time="2026-01-17T00:28:01.568763897Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:01.572677 containerd[1603]: time="2026-01-17T00:28:01.572556416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:01.572803 containerd[1603]: time="2026-01-17T00:28:01.572663666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:01.573172 kubelet[2735]: E0117 00:28:01.573009 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:01.573172 kubelet[2735]: E0117 00:28:01.573073 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:01.573770 kubelet[2735]: E0117 00:28:01.573212 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r9hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-2wpqn_calico-apiserver(16ee4324-8757-4618-9329-530899bfb3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:01.575028 kubelet[2735]: E0117 00:28:01.574386 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-2wpqn" podUID="16ee4324-8757-4618-9329-530899bfb3f8" Jan 17 00:28:03.439890 kubelet[2735]: E0117 00:28:03.429035 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:05.445709 containerd[1603]: time="2026-01-17T00:28:05.444329462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:05.613300 containerd[1603]: time="2026-01-17T00:28:05.612704347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:05.632881 containerd[1603]: time="2026-01-17T00:28:05.616055095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:05.632881 containerd[1603]: time="2026-01-17T00:28:05.616208992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:05.633046 kubelet[2735]: E0117 00:28:05.616652 2735 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:05.633046 kubelet[2735]: E0117 00:28:05.616714 2735 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:05.633046 kubelet[2735]: E0117 00:28:05.616861 2735 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9wrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-575b9f78b6-fb2xv_calico-apiserver(e3787079-d3c5-4000-91a5-36b644436b7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:05.633046 kubelet[2735]: E0117 00:28:05.621097 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-575b9f78b6-fb2xv" podUID="e3787079-d3c5-4000-91a5-36b644436b7f" Jan 17 00:28:05.784161 systemd[1]: Started sshd@39-10.0.0.48:22-10.0.0.1:54562.service - OpenSSH per-connection server daemon (10.0.0.1:54562). 
Jan 17 00:28:06.002317 sshd[6481]: Accepted publickey for core from 10.0.0.1 port 54562 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:06.013760 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:06.206910 systemd-logind[1580]: New session 40 of user core. Jan 17 00:28:06.246958 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 17 00:28:06.806047 sshd[6481]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:06.826307 systemd-logind[1580]: Session 40 logged out. Waiting for processes to exit. Jan 17 00:28:06.829993 systemd[1]: sshd@39-10.0.0.48:22-10.0.0.1:54562.service: Deactivated successfully. Jan 17 00:28:06.850160 systemd[1]: session-40.scope: Deactivated successfully. Jan 17 00:28:06.889992 systemd-logind[1580]: Removed session 40. Jan 17 00:28:07.448705 kubelet[2735]: E0117 00:28:07.445987 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mdmw8" podUID="b773dda6-1d12-466d-8ab6-e9b4e6b1277a"