Jan 28 01:24:34.530256 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 01:24:34.530427 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:24:34.530447 kernel: BIOS-provided physical RAM map:
Jan 28 01:24:34.530456 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:24:34.530464 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:24:34.530472 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:24:34.530482 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:24:34.530490 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:24:34.530499 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 28 01:24:34.530509 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 28 01:24:34.534501 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 28 01:24:34.534518 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 28 01:24:34.534750 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 28 01:24:34.534772 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 28 01:24:34.534863 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 28 01:24:34.534884 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:24:34.534907 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 28 01:24:34.534916 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 28 01:24:34.534925 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:24:34.534934 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 01:24:34.534945 kernel: NX (Execute Disable) protection: active
Jan 28 01:24:34.534956 kernel: APIC: Static calls initialized
Jan 28 01:24:34.534966 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:24:34.534976 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 28 01:24:34.534985 kernel: SMBIOS 2.8 present.
Jan 28 01:24:34.534996 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 28 01:24:34.535008 kernel: Hypervisor detected: KVM
Jan 28 01:24:34.535025 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:24:34.535036 kernel: kvm-clock: using sched offset of 27225287650 cycles
Jan 28 01:24:34.535048 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:24:34.535059 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:24:34.535070 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:24:34.535080 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:24:34.535091 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 28 01:24:34.535102 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 01:24:34.535111 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:24:34.535181 kernel: Using GB pages for direct mapping
Jan 28 01:24:34.535192 kernel: Secure boot disabled
Jan 28 01:24:34.535203 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:24:34.535215 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 01:24:34.535235 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 01:24:34.535248 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:24:34.535259 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:24:34.535278 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 01:24:34.535288 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:24:34.535350 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:24:34.535365 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:24:34.535375 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:24:34.535388 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 01:24:34.535398 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 01:24:34.535418 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 01:24:34.535431 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 01:24:34.535440 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 01:24:34.535453 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 01:24:34.535465 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 01:24:34.535476 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 01:24:34.535488 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 01:24:34.535500 kernel: No NUMA configuration found
Jan 28 01:24:34.535557 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 28 01:24:34.535581 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 28 01:24:34.535678 kernel: Zone ranges:
Jan 28 01:24:34.535694 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:24:34.535707 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 28 01:24:34.535718 kernel: Normal empty
Jan 28 01:24:34.535730 kernel: Movable zone start for each node
Jan 28 01:24:34.535741 kernel: Early memory node ranges
Jan 28 01:24:34.535753 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 01:24:34.535764 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 01:24:34.535783 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 01:24:34.535794 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 28 01:24:34.535807 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 28 01:24:34.535817 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 28 01:24:34.535889 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 28 01:24:34.535904 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:24:34.535915 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 01:24:34.535927 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 01:24:34.535938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:24:34.535956 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 28 01:24:34.535968 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 28 01:24:34.535980 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 28 01:24:34.535991 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:24:34.536000 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:24:34.536011 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:24:34.536023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:24:34.536035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:24:34.536044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:24:34.536062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:24:34.536074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:24:34.536084 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:24:34.536096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:24:34.536108 kernel: TSC deadline timer available
Jan 28 01:24:34.536176 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 28 01:24:34.536192 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:24:34.536202 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:24:34.536213 kernel: kvm-guest: setup PV sched yield
Jan 28 01:24:34.536230 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 28 01:24:34.536243 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:24:34.536256 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:24:34.536266 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:24:34.536278 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 28 01:24:34.536289 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 28 01:24:34.536300 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:24:34.536312 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:24:34.536323 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:24:34.536342 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:24:34.536398 kernel: random: crng init done
Jan 28 01:24:34.536412 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:24:34.536424 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:24:34.536436 kernel: Fallback order for Node 0: 0
Jan 28 01:24:34.536448 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 28 01:24:34.536457 kernel: Policy zone: DMA32
Jan 28 01:24:34.536469 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:24:34.536481 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 28 01:24:34.536499 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:24:34.536511 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 01:24:34.536521 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 01:24:34.536532 kernel: Dynamic Preempt: voluntary
Jan 28 01:24:34.536545 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:24:34.536574 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:24:34.536686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:24:34.536704 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:24:34.536715 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:24:34.536728 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:24:34.536740 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:24:34.536759 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:24:34.536772 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:24:34.536784 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:24:34.536795 kernel: Console: colour dummy device 80x25
Jan 28 01:24:34.536807 kernel: printk: console [ttyS0] enabled
Jan 28 01:24:34.536869 kernel: ACPI: Core revision 20230628
Jan 28 01:24:34.536882 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:24:34.536892 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:24:34.536903 kernel: x2apic enabled
Jan 28 01:24:34.536913 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:24:34.536924 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:24:34.536934 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:24:34.536945 kernel: kvm-guest: setup PV IPIs
Jan 28 01:24:34.536956 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:24:34.536970 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 28 01:24:34.536981 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:24:34.536991 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:24:34.537002 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:24:34.537012 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:24:34.537023 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:24:34.537035 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:24:34.537047 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:24:34.537057 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:24:34.537074 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:24:34.537088 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:24:34.537100 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:24:34.537112 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:24:34.540289 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:24:34.540353 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:24:34.540366 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:24:34.540377 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:24:34.540394 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:24:34.540404 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:24:34.540414 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:24:34.540425 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:24:34.540435 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:24:34.540445 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 01:24:34.540455 kernel: landlock: Up and running.
Jan 28 01:24:34.540465 kernel: SELinux: Initializing.
Jan 28 01:24:34.540475 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:24:34.540490 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:24:34.540500 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:24:34.540510 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:24:34.540520 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:24:34.540530 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:24:34.540540 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:24:34.540550 kernel: signal: max sigframe size: 1776
Jan 28 01:24:34.540560 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:24:34.540572 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:24:34.540586 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:24:34.540686 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:24:34.540698 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:24:34.540712 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:24:34.540723 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:24:34.540734 kernel: smpboot: Max logical packages: 1
Jan 28 01:24:34.540747 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:24:34.540759 kernel: devtmpfs: initialized
Jan 28 01:24:34.540771 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:24:34.540786 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 01:24:34.540796 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 01:24:34.540807 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 28 01:24:34.540817 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 01:24:34.540827 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 01:24:34.540837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:24:34.540847 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:24:34.540858 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:24:34.540868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:24:34.540882 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:24:34.540892 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:24:34.540903 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:24:34.540913 kernel: audit: type=2000 audit(1769563460.405:1): state=initialized audit_enabled=0 res=1
Jan 28 01:24:34.540923 kernel: cpuidle: using governor menu
Jan 28 01:24:34.540933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:24:34.540943 kernel: dca service started, version 1.12.1
Jan 28 01:24:34.540954 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 01:24:34.540967 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 01:24:34.540982 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:24:34.540993 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 01:24:34.541003 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:24:34.541013 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:24:34.541024 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:24:34.541034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:24:34.541043 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:24:34.541053 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:24:34.541063 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:24:34.541078 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:24:34.541088 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 01:24:34.541098 kernel: ACPI: Interpreter enabled
Jan 28 01:24:34.541108 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:24:34.541196 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:24:34.541209 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:24:34.541224 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:24:34.541234 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:24:34.541244 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:24:34.545346 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:24:34.545570 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:24:34.545999 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:24:34.546021 kernel: PCI host bridge to bus 0000:00
Jan 28 01:24:34.550943 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:24:34.551202 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:24:34.551391 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:24:34.551560 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 01:24:34.553327 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 01:24:34.553504 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 28 01:24:34.553772 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:24:34.553988 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 01:24:34.558076 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 28 01:24:34.558346 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 28 01:24:34.558567 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 28 01:24:34.560029 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 28 01:24:34.566475 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 28 01:24:34.566942 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:24:34.567277 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 28 01:24:34.567496 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 28 01:24:34.567815 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 28 01:24:34.568024 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 28 01:24:34.571717 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 28 01:24:34.571918 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 28 01:24:34.572108 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 28 01:24:34.572376 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 28 01:24:34.572789 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 28 01:24:34.573000 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 28 01:24:34.573274 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 28 01:24:34.573484 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 28 01:24:34.578035 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 28 01:24:34.578321 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 01:24:34.578576 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:24:34.578989 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 01:24:34.579244 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 28 01:24:34.579430 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 28 01:24:34.579781 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 01:24:34.579969 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 28 01:24:34.579988 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:24:34.580000 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:24:34.580018 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:24:34.580029 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:24:34.580040 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:24:34.580050 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:24:34.580061 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:24:34.580073 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:24:34.580084 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:24:34.580096 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:24:34.580106 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:24:34.583267 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:24:34.583283 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:24:34.583294 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:24:34.583305 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:24:34.583315 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:24:34.583325 kernel: iommu: Default domain type: Translated
Jan 28 01:24:34.583336 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:24:34.583346 kernel: efivars: Registered efivars operations
Jan 28 01:24:34.583355 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:24:34.583365 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:24:34.583384 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 01:24:34.583394 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 28 01:24:34.583404 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 28 01:24:34.583414 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 28 01:24:34.583698 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:24:34.583881 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:24:34.584053 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:24:34.584068 kernel: vgaarb: loaded
Jan 28 01:24:34.584080 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:24:34.584194 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:24:34.584206 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:24:34.584218 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:24:34.584229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:24:34.584240 kernel: pnp: PnP ACPI init
Jan 28 01:24:34.584433 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 01:24:34.584451 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:24:34.584463 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:24:34.584481 kernel: NET: Registered PF_INET protocol family
Jan 28 01:24:34.584492 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:24:34.584504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:24:34.584515 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:24:34.584527 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:24:34.584538 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:24:34.584549 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:24:34.584561 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:24:34.584572 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:24:34.584588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:24:34.584687 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:24:34.584862 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 28 01:24:34.585035 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 28 01:24:34.588386 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:24:34.588554 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:24:34.588799 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:24:34.589011 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 01:24:34.589237 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 01:24:34.589384 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 28 01:24:34.589399 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:24:34.589409 kernel: Initialise system trusted keyrings
Jan 28 01:24:34.589419 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:24:34.589430 kernel: Key type asymmetric registered
Jan 28 01:24:34.589441 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:24:34.589450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 01:24:34.589466 kernel: io scheduler mq-deadline registered
Jan 28 01:24:34.589477 kernel: io scheduler kyber registered
Jan 28 01:24:34.589488 kernel: io scheduler bfq registered
Jan 28 01:24:34.589498 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:24:34.589509 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:24:34.589519 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:24:34.589529 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:24:34.589541 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:24:34.589553 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:24:34.589564 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:24:34.589581 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:24:34.589685 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:24:34.590023 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:24:34.590043 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 01:24:34.590289 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:24:34.590464 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:24:32 UTC (1769563472)
Jan 28 01:24:34.590756 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 01:24:34.590787 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:24:34.590799 kernel: efifb: probing for efifb
Jan 28 01:24:34.590812 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 28 01:24:34.590824 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 28 01:24:34.590835 kernel: efifb: scrolling: redraw
Jan 28 01:24:34.590846 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 28 01:24:34.590857 kernel: Console: switching to colour frame buffer device 100x37
Jan 28 01:24:34.590868 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:24:34.590880 kernel: pstore: Using crash dump compression: deflate
Jan 28 01:24:34.590895 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 28 01:24:34.590906 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:24:34.590916 kernel: Segment Routing with IPv6
Jan 28 01:24:34.590927 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:24:34.590940 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:24:34.590952 kernel: Key type dns_resolver registered
Jan 28 01:24:34.590965 kernel: IPI shorthand broadcast: enabled
Jan 28 01:24:34.591006 kernel: sched_clock: Marking stable (9703180684, 1443914472)->(13156928041, -2009832885)
Jan 28 01:24:34.591024 kernel: registered taskstats version 1
Jan 28 01:24:34.591036 kernel: Loading compiled-in X.509 certificates
Jan 28 01:24:34.591051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 01:24:34.591063 kernel: Key type .fscrypt registered
Jan 28 01:24:34.591074 kernel: Key type fscrypt-provisioning registered
Jan 28 01:24:34.591086 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:24:34.591097 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:24:34.591108 kernel: ima: No architecture policies found
Jan 28 01:24:34.594329 kernel: clk: Disabling unused clocks
Jan 28 01:24:34.594347 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 01:24:34.594366 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 01:24:34.594377 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 01:24:34.594388 kernel: Run /init as init process
Jan 28 01:24:34.594399 kernel: with arguments:
Jan 28 01:24:34.594411 kernel: /init
Jan 28 01:24:34.594423 kernel: with environment:
Jan 28 01:24:34.594433 kernel: HOME=/
Jan 28 01:24:34.594444 kernel: TERM=linux
Jan 28 01:24:34.594458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:24:34.594476 systemd[1]: Detected virtualization kvm.
Jan 28 01:24:34.594487 systemd[1]: Detected architecture x86-64.
Jan 28 01:24:34.594498 systemd[1]: Running in initrd.
Jan 28 01:24:34.594508 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:24:34.594519 systemd[1]: Hostname set to <localhost>.
Jan 28 01:24:34.594530 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 01:24:34.594541 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:24:34.594556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:24:34.594567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:24:34.594580 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:24:34.596026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:24:34.596047 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:24:34.596068 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:24:34.596081 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:24:34.596093 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:24:34.596104 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:24:34.596165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:24:34.596179 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:24:34.596191 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:24:34.596207 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:24:34.596219 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:24:34.596230 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:24:34.596242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:24:34.596253 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:24:34.596265 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 01:24:34.596276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:24:34.596287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:24:34.596301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:24:34.596313 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:24:34.596324 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:24:34.596335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:24:34.596346 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:24:34.596357 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:24:34.596368 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:24:34.596378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:24:34.596427 systemd-journald[194]: Collecting audit messages is disabled.
Jan 28 01:24:34.596460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:24:34.596472 systemd-journald[194]: Journal started
Jan 28 01:24:34.596498 systemd-journald[194]: Runtime Journal (/run/log/journal/484df9b58f8d4e2887a52a1519ee36b1) is 6.0M, max 48.3M, 42.2M free.
Jan 28 01:24:34.613727 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:24:34.638375 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:24:34.677030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:24:34.680411 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:24:34.803768 systemd-modules-load[195]: Inserted module 'overlay'
Jan 28 01:24:34.810371 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:24:34.882384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:24:34.922554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:24:34.973762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:24:35.058004 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:24:35.086887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:24:35.119444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:24:35.168984 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:24:35.220057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:24:35.286051 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:24:35.364853 dracut-cmdline[226]: dracut-dracut-053
Jan 28 01:24:35.386217 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:24:35.624815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:24:35.672309 kernel: Bridge firewalling registered
Jan 28 01:24:35.675721 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 28 01:24:35.694516 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:24:35.875786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:24:36.017207 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:24:36.129515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:24:36.231407 kernel: SCSI subsystem initialized
Jan 28 01:24:36.269470 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:24:36.285421 systemd-resolved[310]: Positive Trust Anchors:
Jan 28 01:24:36.285484 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:24:36.285528 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:24:36.296558 systemd-resolved[310]: Defaulting to hostname 'linux'.
Jan 28 01:24:36.300341 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:24:36.476971 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:24:36.568569 kernel: iscsi: registered transport (tcp)
Jan 28 01:24:36.648568 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:24:36.651430 kernel: QLogic iSCSI HBA Driver
Jan 28 01:24:36.844482 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:24:36.882962 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:24:37.035070 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:24:37.038466 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:24:37.057236 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 01:24:37.221016 kernel: raid6: avx2x4 gen() 20540 MB/s
Jan 28 01:24:37.241928 kernel: raid6: avx2x2 gen() 16353 MB/s
Jan 28 01:24:37.266251 kernel: raid6: avx2x1 gen() 5935 MB/s
Jan 28 01:24:37.266338 kernel: raid6: using algorithm avx2x4 gen() 20540 MB/s
Jan 28 01:24:37.294521 kernel: raid6: .... xor() 2723 MB/s, rmw enabled
Jan 28 01:24:37.294664 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:24:37.360957 kernel: xor: automatically using best checksumming function avx
Jan 28 01:24:38.013870 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:24:38.076538 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:24:38.125401 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:24:38.174360 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Jan 28 01:24:38.203794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:24:38.243405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:24:38.287346 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 28 01:24:38.391238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:24:38.440112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:24:38.736131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:24:38.796013 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:24:38.990520 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:24:39.015588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:24:39.042989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:24:39.076576 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:24:39.203190 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:24:39.233020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:24:39.233320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:24:39.272427 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:24:39.380929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:24:39.417562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:24:39.442816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:24:39.458310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:24:39.506872 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:24:39.634273 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:24:39.634584 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 28 01:24:39.658571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:24:39.700291 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:24:39.700685 kernel: GPT:9289727 != 19775487
Jan 28 01:24:39.700705 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:24:39.700719 kernel: GPT:9289727 != 19775487
Jan 28 01:24:39.700732 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:24:39.700747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:24:39.908508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:24:40.031464 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:24:40.116193 kernel: libata version 3.00 loaded.
Jan 28 01:24:40.116254 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:24:40.298442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 01:24:40.406250 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (480)
Jan 28 01:24:40.426284 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:24:40.452737 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (487)
Jan 28 01:24:40.454467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:24:40.498350 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 01:24:40.515435 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:24:40.638504 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:24:40.638971 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:24:40.639042 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 28 01:24:40.640985 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:24:40.740895 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 01:24:40.741381 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:24:40.741690 kernel: scsi host0: ahci
Jan 28 01:24:40.742486 kernel: scsi host1: ahci
Jan 28 01:24:40.742813 kernel: scsi host2: ahci
Jan 28 01:24:40.745870 kernel: scsi host3: ahci
Jan 28 01:24:40.746087 kernel: scsi host4: ahci
Jan 28 01:24:40.746439 kernel: scsi host5: ahci
Jan 28 01:24:40.747099 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 28 01:24:40.747117 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 28 01:24:40.747131 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:24:40.750576 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 28 01:24:40.750929 disk-uuid[538]: Primary Header is updated.
Jan 28 01:24:40.750929 disk-uuid[538]: Secondary Entries is updated.
Jan 28 01:24:40.750929 disk-uuid[538]: Secondary Header is updated.
Jan 28 01:24:40.805435 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 28 01:24:40.805481 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:24:40.805500 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 28 01:24:40.805516 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 28 01:24:40.820757 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:24:41.104189 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:24:41.104257 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:24:41.123105 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:24:41.123214 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:24:41.130678 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:24:41.154810 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:24:41.154921 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:24:41.154941 kernel: ata3.00: applying bridge limits
Jan 28 01:24:41.166780 kernel: ata3.00: configured for UDMA/100
Jan 28 01:24:41.178900 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:24:41.358142 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:24:41.358710 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:24:41.385384 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:24:41.825268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:24:41.831573 disk-uuid[539]: The operation has completed successfully.
Jan 28 01:24:42.140110 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:24:42.164452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:24:42.235506 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:24:42.297532 sh[604]: Success
Jan 28 01:24:42.503121 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 28 01:24:42.717231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:24:42.776891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:24:42.787469 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:24:42.864303 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22
Jan 28 01:24:42.864390 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:24:42.864411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 01:24:42.878481 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:24:42.889579 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 01:24:42.960805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:24:43.002108 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:24:43.049024 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:24:43.063064 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:24:43.128309 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:24:43.128382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:24:43.128402 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:24:43.190518 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:24:43.283135 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 01:24:43.327130 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:24:43.398682 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:24:43.493145 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:24:43.767904 ignition[700]: Ignition 2.19.0
Jan 28 01:24:43.767915 ignition[700]: Stage: fetch-offline
Jan 28 01:24:43.767967 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:24:43.767980 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:24:43.768086 ignition[700]: parsed url from cmdline: ""
Jan 28 01:24:43.768093 ignition[700]: no config URL provided
Jan 28 01:24:43.768102 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:24:43.768115 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:24:43.768214 ignition[700]: op(1): [started] loading QEMU firmware config module
Jan 28 01:24:43.768226 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:24:43.810806 ignition[700]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:24:43.810843 ignition[700]: QEMU firmware config was not found. Ignoring...
Jan 28 01:24:43.930848 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:24:43.996546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:24:44.067959 systemd-networkd[792]: lo: Link UP
Jan 28 01:24:44.067996 systemd-networkd[792]: lo: Gained carrier
Jan 28 01:24:44.071879 systemd-networkd[792]: Enumeration completed
Jan 28 01:24:44.073339 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:24:44.075855 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:24:44.075861 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:24:44.080531 systemd-networkd[792]: eth0: Link UP
Jan 28 01:24:44.080537 systemd-networkd[792]: eth0: Gained carrier
Jan 28 01:24:44.080548 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:24:44.093361 systemd[1]: Reached target network.target - Network.
Jan 28 01:24:44.153909 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:24:44.487535 ignition[700]: parsing config with SHA512: 2c03a230933f8d36f43996efd19071bc65e91b2f5035da862e61663f2a8a2073a60f0c6c661d273a7e6966b8bc1ffc195c8382b3cc3314fea5aae4fa92aae9ea
Jan 28 01:24:44.527872 unknown[700]: fetched base config from "system"
Jan 28 01:24:44.527893 unknown[700]: fetched user config from "qemu"
Jan 28 01:24:44.538919 ignition[700]: fetch-offline: fetch-offline passed
Jan 28 01:24:44.539017 ignition[700]: Ignition finished successfully
Jan 28 01:24:44.582882 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:24:44.680295 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:24:44.739872 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:24:44.843034 ignition[796]: Ignition 2.19.0
Jan 28 01:24:44.843067 ignition[796]: Stage: kargs
Jan 28 01:24:44.843439 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:24:44.843459 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:24:44.846415 ignition[796]: kargs: kargs passed
Jan 28 01:24:44.846489 ignition[796]: Ignition finished successfully
Jan 28 01:24:44.970354 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:24:45.089252 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:24:45.165333 ignition[803]: Ignition 2.19.0
Jan 28 01:24:45.165387 ignition[803]: Stage: disks
Jan 28 01:24:45.182714 kernel: hrtimer: interrupt took 2425598 ns
Jan 28 01:24:45.165712 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:24:45.165733 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:24:45.187733 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:24:45.177864 ignition[803]: disks: disks passed
Jan 28 01:24:45.194033 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:24:45.177963 ignition[803]: Ignition finished successfully
Jan 28 01:24:45.208120 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:24:45.241107 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:24:45.279926 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:24:45.301275 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:24:45.517555 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:24:45.618058 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 28 01:24:45.680122 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:24:45.724565 systemd-networkd[792]: eth0: Gained IPv6LL
Jan 28 01:24:45.727883 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:24:46.475051 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none.
Jan 28 01:24:46.484037 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:24:46.495324 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:24:46.558066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:24:46.630471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:24:46.656393 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:24:46.756675 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822)
Jan 28 01:24:46.756732 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:24:46.756749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:24:46.756763 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:24:46.656466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:24:46.656507 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:24:46.685150 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:24:46.820077 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:24:46.832213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:24:46.881268 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:24:47.097929 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:24:47.180685 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:24:47.238331 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:24:47.297036 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:24:47.905802 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:24:47.970066 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:24:47.998524 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:24:48.198563 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:24:48.212007 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:24:48.434493 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:24:48.518950 ignition[935]: INFO : Ignition 2.19.0
Jan 28 01:24:48.518950 ignition[935]: INFO : Stage: mount
Jan 28 01:24:48.542062 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:24:48.542062 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:24:48.542062 ignition[935]: INFO : mount: mount passed
Jan 28 01:24:48.542062 ignition[935]: INFO : Ignition finished successfully
Jan 28 01:24:48.576503 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:24:48.658076 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:24:48.717156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:24:48.774222 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948)
Jan 28 01:24:48.796735 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:24:48.796951 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:24:48.816958 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:24:48.910072 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:24:48.929330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:24:49.083008 ignition[965]: INFO : Ignition 2.19.0
Jan 28 01:24:49.083008 ignition[965]: INFO : Stage: files
Jan 28 01:24:49.107340 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:24:49.107340 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:24:49.107340 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:24:49.107340 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:24:49.107340 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 28 01:24:49.202123 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 28 01:24:49.121739 unknown[965]: wrote ssh authorized keys file for user: core
Jan 28 01:24:49.397286 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
"/sysroot/home/core/install.sh" Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:24:50.128171 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:24:50.320888 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:24:50.320888 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:24:50.320888 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:24:50.320888 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:24:50.320888 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:24:50.320888 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 01:24:50.676338 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 01:24:53.878954 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:24:53.905087 ignition[965]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 28 01:24:53.924692 
Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 28 01:24:53.924692 ignition[965]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:24:55.536893 ignition[965]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:24:55.584259 ignition[965]: INFO : files: files passed
Jan 28 01:24:55.584259 ignition[965]: INFO : Ignition finished successfully
Jan 28 01:24:55.592198 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:24:55.700722 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:24:55.747048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:24:55.773414 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:24:55.773738 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:24:55.896404 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 28 01:24:55.916002 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:24:55.934111 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:24:55.934111 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:24:55.982966 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:24:55.996815 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:24:56.072269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:24:56.383323 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:24:56.383578 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:24:56.426729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 01:24:56.437983 systemd[1]: Reached target initrd.target - Initrd Default Target.
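The files stage above (user "core", SSH keys, the helm tarball, the kubernetes sysext link, unit drop-ins and presets) is driven entirely by the Ignition config fetched from the qemu provider. A sketch of a Butane config that would produce roughly these operations (helm and kubernetes URLs taken from the log; everything else, including the SSH key, is a placeholder):

  # Sketch: Butane source for roughly the files-stage ops logged above.
  cat >config.bu <<'EOF'
  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... # placeholder key
  storage:
    files:
      - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
        contents:
          source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
  systemd:
    units:
      - name: containerd.service
        dropins:
          - name: 10-use-cgroupfs.conf
            contents: |
              # drop-in body omitted
      - name: prepare-helm.service
        enabled: true
        contents: |
          # unit body omitted
  EOF
  # Transpile to the Ignition JSON that ignition[965] parsed:
  butane --strict config.bu > config.ign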
Jan 28 01:24:56.484802 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 01:24:56.518939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 01:24:56.575111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:24:56.604125 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 01:24:56.635891 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:24:56.665812 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:24:56.686813 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 01:24:56.701305 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 01:24:56.706106 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:24:56.749445 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 01:24:56.756519 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 01:24:56.765831 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 01:24:56.812045 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:24:56.839960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 01:24:56.866501 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 01:24:56.902555 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:24:56.956394 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 01:24:56.969719 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 01:24:56.990944 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 01:24:57.008106 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 01:24:57.010676 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:24:57.112876 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:24:57.122831 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:24:57.164191 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 01:24:57.168056 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:24:57.192517 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 01:24:57.192827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:24:57.226569 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 01:24:57.227031 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:24:57.294189 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 01:24:57.366170 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:24:57.371793 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:24:57.401717 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:24:57.411516 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:24:57.421106 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:24:57.421320 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:24:57.435464 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 01:24:57.435720 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:24:57.450197 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 01:24:57.465816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:24:57.614933 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 01:24:57.615112 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 01:24:57.704337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 01:24:57.754078 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 01:24:57.801160 ignition[1019]: INFO : Ignition 2.19.0
Jan 28 01:24:57.801160 ignition[1019]: INFO : Stage: umount
Jan 28 01:24:57.801160 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:24:57.801160 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:24:57.754466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:24:57.872064 ignition[1019]: INFO : umount: umount passed
Jan 28 01:24:57.872064 ignition[1019]: INFO : Ignition finished successfully
Jan 28 01:24:57.777085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 01:24:57.812030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 01:24:57.812324 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:24:57.831283 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 01:24:57.831465 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:24:57.872565 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 01:24:57.872896 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 01:24:57.884825 systemd[1]: Stopped target network.target - Network.
Jan 28 01:24:57.888400 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 01:24:57.888511 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 01:24:57.899092 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 01:24:57.899407 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 01:24:57.905823 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 01:24:57.905907 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 01:24:57.922040 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 01:24:57.922157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 01:24:57.923112 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 01:24:57.923947 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 01:24:57.932047 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 01:24:57.932499 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 01:24:58.163542 systemd-networkd[792]: eth0: DHCPv6 lease lost
Jan 28 01:24:58.168558 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 01:24:58.169082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 01:24:58.183177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 01:24:58.185293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:24:58.216949 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 01:24:58.217170 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 01:24:58.267140 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 01:24:58.267893 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:24:58.340314 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 01:24:58.365986 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 01:24:58.366100 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:24:58.366297 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 01:24:58.366359 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:24:58.373182 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 01:24:58.373320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:24:58.380428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:24:58.384504 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 01:24:58.385865 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 01:24:58.386548 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 01:24:58.393588 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 01:24:58.393887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 01:24:58.419382 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 01:24:58.419690 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 01:24:58.433905 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 01:24:58.434267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:24:58.463351 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 01:24:58.463502 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:24:58.472011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 01:24:58.473879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:24:58.488081 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 01:24:58.488191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:24:58.498927 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 01:24:58.499014 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:24:58.525268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:24:58.525369 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:24:58.606190 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 01:24:58.632853 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 01:24:58.632996 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:24:58.657167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:24:58.657358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:24:58.683113 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 01:24:58.683428 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 01:24:58.819116 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 01:24:58.875529 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 01:24:58.900027 systemd[1]: Switching root.
Jan 28 01:24:58.972951 systemd-journald[194]: Journal stopped
Jan 28 01:25:05.325856 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 28 01:25:05.325950 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 01:25:05.325978 kernel: SELinux: policy capability open_perms=1
Jan 28 01:25:05.325993 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 01:25:05.326022 kernel: SELinux: policy capability always_check_network=0
Jan 28 01:25:05.326038 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 01:25:05.326056 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 01:25:05.326079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 01:25:05.326094 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 01:25:05.326109 kernel: audit: type=1403 audit(1769563499.878:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 28 01:25:05.326125 systemd[1]: Successfully loaded SELinux policy in 192.672ms.
Jan 28 01:25:05.326228 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 46.331ms.
Jan 28 01:25:05.326304 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:25:05.326323 systemd[1]: Detected virtualization kvm.
Jan 28 01:25:05.326338 systemd[1]: Detected architecture x86-64.
Jan 28 01:25:05.326355 systemd[1]: Detected first boot.
Jan 28 01:25:05.326372 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 01:25:05.326387 zram_generator::config[1135]: No configuration found.
Jan 28 01:25:05.326412 systemd[1]: Populated /etc with preset unit settings.
Jan 28 01:25:05.326428 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 01:25:05.326503 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 28 01:25:05.326521 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 01:25:05.326537 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 01:25:05.326554 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 01:25:05.326570 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 01:25:05.326585 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 01:25:05.326672 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 01:25:05.326695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 01:25:05.326711 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 01:25:05.326727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:25:05.326744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:25:05.326761 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 01:25:05.326778 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 01:25:05.326794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 01:25:05.326865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:25:05.326887 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 28 01:25:05.326909 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:25:05.326995 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 01:25:05.327016 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:25:05.327035 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:25:05.327051 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:25:05.327067 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:25:05.327084 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 01:25:05.327105 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 01:25:05.327121 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:25:05.327137 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 01:25:05.327154 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:25:05.327169 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:25:05.327294 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:25:05.327315 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 01:25:05.327331 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 01:25:05.327347 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 01:25:05.327420 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 01:25:05.327444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:05.327516 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 01:25:05.327537 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 01:25:05.327554 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 01:25:05.327570 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 01:25:05.327586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:25:05.327704 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:25:05.327726 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 01:25:05.327751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:25:05.327770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:25:05.327787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:25:05.327805 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 01:25:05.327823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
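The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units above are all instances of systemd's modprobe@.service template. A sketch of how to inspect it and what it boils down to (the paraphrased body is an assumption; check the real file on the host):

  # View the actual template behind these instances:
  systemctl cat modprobe@loop.service
  # Its core is approximately (paraphrased, not verbatim):
  #   [Service]
  #   Type=oneshot
  #   ExecStart=-/sbin/modprobe -abq %I
  # Type=oneshot is why each instance logs "Deactivated successfully"
  # right after finishing: the unit runs once and exits.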
Jan 28 01:25:05.327841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 01:25:05.327861 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 28 01:25:05.327877 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 28 01:25:05.327899 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:25:05.327914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:25:05.327930 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:25:05.327949 kernel: fuse: init (API version 7.39)
Jan 28 01:25:05.327969 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 01:25:05.327985 kernel: loop: module loaded
Jan 28 01:25:05.328000 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:25:05.328046 systemd-journald[1193]: Collecting audit messages is disabled.
Jan 28 01:25:05.328088 systemd-journald[1193]: Journal started
Jan 28 01:25:05.328116 systemd-journald[1193]: Runtime Journal (/run/log/journal/484df9b58f8d4e2887a52a1519ee36b1) is 6.0M, max 48.3M, 42.2M free.
Jan 28 01:25:05.360769 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:05.371040 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:25:05.382773 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 01:25:05.391164 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 01:25:05.401004 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 01:25:05.407969 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 01:25:05.442397 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 01:25:05.455750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 01:25:05.463198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 01:25:05.477369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:25:05.485414 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 01:25:05.485861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 01:25:05.493439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:25:05.493912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:25:05.501346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:25:05.501890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:25:05.518936 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 01:25:05.519994 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 01:25:05.545097 kernel: ACPI: bus type drm_connector registered
Jan 28 01:25:05.555982 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:25:05.556697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:25:05.584853 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 01:25:05.585715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 01:25:05.594500 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:25:05.614322 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:25:05.639197 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 01:25:05.711884 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:25:05.753516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 01:25:05.774501 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 01:25:05.791958 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 01:25:05.823897 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 01:25:05.916911 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 01:25:05.929733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 01:25:05.941796 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 01:25:05.977342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 01:25:06.032418 systemd-journald[1193]: Time spent on flushing to /var/log/journal/484df9b58f8d4e2887a52a1519ee36b1 is 81.667ms for 969 entries.
Jan 28 01:25:06.032418 systemd-journald[1193]: System Journal (/var/log/journal/484df9b58f8d4e2887a52a1519ee36b1) is 8.0M, max 195.6M, 187.6M free.
Jan 28 01:25:06.260496 systemd-journald[1193]: Received client request to flush runtime journal.
Jan 28 01:25:06.032880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:25:06.073959 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:25:06.142689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:25:06.176334 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 28 01:25:06.191837 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 28 01:25:06.212132 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 28 01:25:06.249745 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 28 01:25:06.271980 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 28 01:25:06.296567 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 28 01:25:06.339388 udevadm[1247]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 28 01:25:06.361919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:25:06.395210 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 28 01:25:06.395234 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jan 28 01:25:06.411302 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
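The journald lines above show the runtime journal (/run/log/journal, 6.0M used, 48.3M max) being flushed into the persistent system journal (/var/log/journal, max 195.6M). Whether a persistent journal exists at all is a journald.conf setting; a sketch (directory layout and sizes here are assumptions, not taken from this host):

  # Sketch: force a persistent journal, the precondition for the
  # systemd-journal-flush.service step above to have somewhere to flush.
  mkdir -p /etc/systemd/journald.conf.d
  cat >/etc/systemd/journald.conf.d/10-persistent.conf <<'EOF'
  [Journal]
  Storage=persistent
  SystemMaxUse=200M
  EOF
  systemctl restart systemd-journald
  # Or flush the runtime journal by hand, like the service does:
  journalctl --flush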
Jan 28 01:25:06.443972 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 28 01:25:06.572045 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 28 01:25:06.607951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:25:06.760044 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 28 01:25:06.760069 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 28 01:25:06.788792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:25:07.993892 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 28 01:25:08.043193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:25:08.282190 systemd-udevd[1266]: Using default interface naming scheme 'v255'.
Jan 28 01:25:08.460320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:25:08.559950 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:25:08.773403 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 28 01:25:08.943113 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 28 01:25:09.469379 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1281)
Jan 28 01:25:09.569698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 28 01:25:09.578344 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 28 01:25:09.635693 kernel: ACPI: button: Power Button [PWRF]
Jan 28 01:25:09.753874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:25:10.187883 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 28 01:25:10.195385 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 28 01:25:10.195880 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 28 01:25:10.237725 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 28 01:25:10.244422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:25:10.310953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:25:10.311449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:10.327007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:25:10.591963 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 28 01:25:10.661710 kernel: mousedev: PS/2 mouse device common for all mice
Jan 28 01:25:10.828224 systemd-networkd[1276]: lo: Link UP
Jan 28 01:25:10.853741 systemd-networkd[1276]: lo: Gained carrier
Jan 28 01:25:10.899507 systemd-networkd[1276]: Enumeration completed
Jan 28 01:25:10.923453 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:25:10.932908 systemd-networkd[1276]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:25:10.960740 systemd-networkd[1276]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:25:10.985830 systemd-networkd[1276]: eth0: Link UP
Jan 28 01:25:10.985847 systemd-networkd[1276]: eth0: Gained carrier
Jan 28 01:25:10.985881 systemd-networkd[1276]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:25:11.002012 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 28 01:25:11.063796 systemd-networkd[1276]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:25:11.747402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:12.207992 systemd-networkd[1276]: eth0: Gained IPv6LL
Jan 28 01:25:12.229894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 28 01:25:12.954261 kernel: kvm_amd: TSC scaling supported
Jan 28 01:25:12.958241 kernel: kvm_amd: Nested Virtualization enabled
Jan 28 01:25:12.959379 kernel: kvm_amd: Nested Paging enabled
Jan 28 01:25:12.989182 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 28 01:25:13.062941 kernel: kvm_amd: PMU virtualization is disabled
Jan 28 01:25:14.793875 kernel: EDAC MC: Ver: 3.0.0
Jan 28 01:25:15.131922 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 28 01:25:15.574722 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 28 01:25:15.809009 lvm[1318]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 28 01:25:16.444128 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 28 01:25:16.491040 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:25:16.538764 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 28 01:25:16.805492 lvm[1321]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 28 01:25:17.175429 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 28 01:25:17.277925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:25:17.339988 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 01:25:17.341380 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:25:17.394285 systemd[1]: Reached target machines.target - Containers.
Jan 28 01:25:17.496029 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 28 01:25:17.560185 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 28 01:25:17.608847 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 28 01:25:17.636760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:25:17.667740 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 28 01:25:17.712842 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 28 01:25:17.811288 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 28 01:25:17.844995 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 28 01:25:17.919437 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 28 01:25:18.050737 kernel: loop0: detected capacity change from 0 to 142488
Jan 28 01:25:18.188497 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 28 01:25:18.193745 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 28 01:25:18.387345 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 28 01:25:18.496872 kernel: loop1: detected capacity change from 0 to 140768
Jan 28 01:25:18.770960 kernel: loop2: detected capacity change from 0 to 224512
Jan 28 01:25:19.288401 kernel: loop3: detected capacity change from 0 to 142488
Jan 28 01:25:19.715510 kernel: loop4: detected capacity change from 0 to 140768
Jan 28 01:25:20.000122 kernel: loop5: detected capacity change from 0 to 224512
Jan 28 01:25:20.197916 (sd-merge)[1344]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 28 01:25:20.200162 (sd-merge)[1344]: Merged extensions into '/usr'.
Jan 28 01:25:20.275042 systemd[1]: Reloading requested from client PID 1329 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 28 01:25:20.278514 systemd[1]: Reloading...
Jan 28 01:25:20.678844 zram_generator::config[1371]: No configuration found.
Jan 28 01:25:21.819709 ldconfig[1325]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 28 01:25:22.078741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 28 01:25:22.515506 systemd[1]: Reloading finished in 2235 ms.
Jan 28 01:25:22.556845 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 28 01:25:22.570494 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 28 01:25:22.597106 systemd[1]: Starting ensure-sysext.service...
Jan 28 01:25:22.617076 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:25:22.631906 systemd[1]: Reloading requested from client PID 1415 ('systemctl') (unit ensure-sysext.service)...
Jan 28 01:25:22.635020 systemd[1]: Reloading...
Jan 28 01:25:23.003474 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 28 01:25:23.004479 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 28 01:25:23.013790 systemd-tmpfiles[1416]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 28 01:25:23.014456 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Jan 28 01:25:23.014923 systemd-tmpfiles[1416]: ACLs are not supported, ignoring.
Jan 28 01:25:23.041172 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 01:25:23.041827 systemd-tmpfiles[1416]: Skipping /boot
Jan 28 01:25:23.093797 zram_generator::config[1449]: No configuration found.
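The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, backed by the loop devices set up just before. The same mechanism can be driven by hand; a sketch (the kubernetes image path matches what Ignition wrote earlier):

  # Sketch: reproducing the (sd-merge) step manually. Extension images
  # are discovered in /etc/extensions and /var/lib/extensions.
  ln -s /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw \
        /etc/extensions/kubernetes.raw
  systemd-sysext merge      # overlay all discovered extensions onto /usr
  systemd-sysext status     # show which hierarchies are extended
  systemd-sysext unmerge    # drop the overlays again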
Jan 28 01:25:23.116981 systemd-tmpfiles[1416]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 01:25:23.117047 systemd-tmpfiles[1416]: Skipping /boot
Jan 28 01:25:23.714247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 28 01:25:23.933906 systemd[1]: Reloading finished in 1297 ms.
Jan 28 01:25:23.990383 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:25:24.068057 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 28 01:25:24.102750 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 28 01:25:24.175547 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 28 01:25:24.257064 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:25:24.311964 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 28 01:25:24.349314 augenrules[1508]: No rules
Jan 28 01:25:24.367157 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 28 01:25:24.397544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:24.398481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:25:24.441416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:25:24.461998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:25:24.651961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:25:24.674790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:25:24.684452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:24.690483 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 28 01:25:24.766435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:25:24.767240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:25:24.805547 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 28 01:25:24.829202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:25:24.829911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:25:24.874762 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:25:24.875265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:25:24.983821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:24.984126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:25:25.020113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:25:25.059074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:25:25.111250 systemd-resolved[1501]: Positive Trust Anchors:
Jan 28 01:25:25.111274 systemd-resolved[1501]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:25:25.111389 systemd-resolved[1501]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:25:25.114790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:25:25.140704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 01:25:25.171765 systemd-resolved[1501]: Defaulting to hostname 'linux'.
Jan 28 01:25:25.173816 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 28 01:25:25.200868 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 01:25:25.201933 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:25.258927 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:25:25.350763 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 28 01:25:25.398522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:25:25.399010 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:25:25.427937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:25:25.431092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:25:25.483963 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:25:25.492086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:25:25.541499 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 28 01:25:25.717901 systemd[1]: Reached target network.target - Network.
Jan 28 01:25:25.760860 systemd[1]: Reached target network-online.target - Network is Online.
Jan 28 01:25:25.786110 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:25:25.816306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:25.816825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 01:25:25.985425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 01:25:26.089757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 01:25:26.156069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 01:25:26.172523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 01:25:26.180427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
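The ". IN DS 20326 8 2 e06d..." record printed above is the built-in DNSSEC trust anchor for the root zone (key tag 20326, the KSK-2017), and the negative anchors exempt private and special-use domains from validation. Whether resolved actually enforces DNSSEC is a resolved.conf setting; a sketch (the shipped default is typically DNSSEC=no):

  # Sketch: enabling DNSSEC validation against the root trust anchor
  # logged above; allow-downgrade tolerates broken upstream resolvers.
  mkdir -p /etc/systemd/resolved.conf.d
  cat >/etc/systemd/resolved.conf.d/10-dnssec.conf <<'EOF'
  [Resolve]
  DNSSEC=allow-downgrade
  EOF
  systemctl restart systemd-resolved
  resolvectl query --type=DS .   # inspect the root DS record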
Jan 28 01:25:26.180755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 01:25:26.180883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 01:25:26.185024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 01:25:26.185530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 01:25:26.195878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 01:25:26.196410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 01:25:26.206941 systemd[1]: Finished ensure-sysext.service.
Jan 28 01:25:26.262969 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 01:25:26.266223 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 01:25:26.324903 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 01:25:26.328525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 01:25:26.367267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 01:25:26.367853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 01:25:26.413914 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 28 01:25:26.961930 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 28 01:25:26.981444 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:25:26.982577 systemd-timesyncd[1560]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 28 01:25:27.007495 systemd-timesyncd[1560]: Initial clock synchronization to Wed 2026-01-28 01:25:27.271660 UTC.
Jan 28 01:25:27.009953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 28 01:25:27.024732 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 28 01:25:27.039454 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 28 01:25:27.063289 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 28 01:25:27.065189 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:25:27.074496 systemd[1]: Reached target time-set.target - System Time Set.
Jan 28 01:25:27.087255 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 28 01:25:27.126124 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 28 01:25:27.172937 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:25:27.211474 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 28 01:25:27.324065 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 28 01:25:27.346532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 28 01:25:27.374772 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 28 01:25:27.395392 systemd[1]: Reached target sockets.target - Socket Units.
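systemd-timesyncd above contacts 10.0.0.1:123, presumably the NTP server advertised in the DHCP lease from the same gateway. Static servers can be pinned in timesyncd.conf instead; a sketch (server names are examples, not from this log):

  # Sketch: pinning NTP servers for systemd-timesyncd rather than
  # relying on the DHCP-provided 10.0.0.1 seen above.
  mkdir -p /etc/systemd/timesyncd.conf.d
  cat >/etc/systemd/timesyncd.conf.d/10-ntp.conf <<'EOF'
  [Time]
  NTP=0.pool.ntp.org 1.pool.ntp.org
  FallbackNTP=time.cloudflare.com
  EOF
  systemctl restart systemd-timesyncd
  timedatectl timesync-status   # shows the chosen server and sync state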
Jan 28 01:25:27.415249 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:25:27.435247 systemd[1]: System is tainted: cgroupsv1
Jan 28 01:25:27.435386 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 28 01:25:27.435431 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 28 01:25:27.449157 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 28 01:25:27.489052 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 28 01:25:27.526791 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 28 01:25:27.551858 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 28 01:25:27.614024 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 28 01:25:27.635093 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 28 01:25:27.637577 jq[1568]: false
Jan 28 01:25:27.660893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:25:27.704040 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 28 01:25:27.736086 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 28 01:25:27.802224 extend-filesystems[1570]: Found loop3
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found loop4
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found loop5
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found sr0
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda1
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda2
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda3
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found usr
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda4
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda6
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda7
Jan 28 01:25:27.812759 extend-filesystems[1570]: Found vda9
Jan 28 01:25:27.812759 extend-filesystems[1570]: Checking size of /dev/vda9
Jan 28 01:25:27.984895 dbus-daemon[1567]: [system] SELinux support is enabled
Jan 28 01:25:28.052221 extend-filesystems[1570]: Resized partition /dev/vda9
Jan 28 01:25:27.874525 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 28 01:25:28.057867 extend-filesystems[1588]: resize2fs 1.47.1 (20-May-2024)
Jan 28 01:25:28.110129 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 28 01:25:28.154358 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 28 01:25:28.177550 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 28 01:25:28.261382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1598)
Jan 28 01:25:28.426791 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 28 01:25:28.478260 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 28 01:25:28.478260 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 28 01:25:28.478260 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 28 01:25:28.506036 extend-filesystems[1570]: Resized filesystem in /dev/vda9
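extend-filesystems above grows the root ext4 filesystem online from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) to fill the resized /dev/vda9. The manual equivalent, sketched:

  # Sketch: the manual equivalent of the extend-filesystems step above.
  # ext4 supports online growth, so this runs against the mounted root.
  lsblk /dev/vda9      # confirm the partition already has the new size
  resize2fs /dev/vda9  # grow the filesystem to fill the partition
  df -h /              # verify the new capacity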
Jan 28 01:25:28.506036 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Jan 28 01:25:28.641522 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:25:28.668494 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:25:28.687532 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:25:28.729082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:25:28.766253 jq[1614]: true Jan 28 01:25:28.764019 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:25:28.781021 update_engine[1613]: I20260128 01:25:28.778293 1613 main.cc:92] Flatcar Update Engine starting Jan 28 01:25:28.781021 update_engine[1613]: I20260128 01:25:28.780735 1613 update_check_scheduler.cc:74] Next update check in 9m4s Jan 28 01:25:28.805107 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:25:28.805555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:25:28.811129 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:25:28.811608 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:25:28.832320 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:25:28.833039 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:25:28.864465 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:25:28.961492 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:25:28.964257 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:25:29.029053 systemd-logind[1612]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 01:25:29.029150 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:25:29.042367 systemd-logind[1612]: New seat seat0. Jan 28 01:25:29.049483 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:25:29.085112 jq[1621]: true Jan 28 01:25:29.266453 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:25:29.307406 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:25:29.308379 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:25:29.380832 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 01:25:29.398343 tar[1620]: linux-amd64/LICENSE Jan 28 01:25:29.403288 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:25:29.405005 tar[1620]: linux-amd64/helm Jan 28 01:25:29.416728 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:25:29.437018 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:25:29.437312 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:25:29.437549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
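The extend-filesystems run above grows /dev/vda9 online from 553472 to 1864699 4k blocks while it is mounted at /. Reproducing the same grow by hand takes two commands; a sketch assuming cloud-utils' growpart is available (the Flatcar unit uses its own tooling for the partition step):

    growpart /dev/vda 9    # extend partition 9 into the free space
    resize2fs /dev/vda9    # ext4 supports online grow while mounted
    df -h /                # confirm the new capacity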
Jan 28 01:25:29.453827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:25:29.454340 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:25:29.477969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:25:29.496006 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:25:29.509904 bash[1667]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:25:29.614320 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:25:29.675134 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:25:29.799898 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:25:29.824964 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:25:30.119550 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:25:30.131404 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:25:30.210964 locksmithd[1669]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:25:30.222394 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:25:30.577916 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:25:30.806557 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:25:30.883995 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:25:30.910986 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:25:31.319390 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:25:31.595784 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:37990.service - OpenSSH per-connection server daemon (10.0.0.1:37990). Jan 28 01:25:32.151726 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 37990 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:32.169707 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:32.228458 systemd-logind[1612]: New session 1 of user core. Jan 28 01:25:32.232733 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:25:32.286140 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:25:32.611982 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:25:32.762511 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:25:33.031496 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:25:33.699859 containerd[1622]: time="2026-01-28T01:25:33.698588424Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:25:33.843243 containerd[1622]: time="2026-01-28T01:25:33.840728845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.853887 containerd[1622]: time="2026-01-28T01:25:33.851245096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
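locksmithd above starts with strategy="reboot", meaning the node reboots as soon as update_engine stages a new image. A sketch for switching to lock-coordinated reboots, assuming the stock Flatcar config path and key name, and that REBOOT_STRATEGY is not already set in the file:

    # Valid strategies include reboot, etcd-lock, and off.
    cat <<'EOF' >>/etc/flatcar/update.conf
    REBOOT_STRATEGY=etcd-lock
    EOF
    systemctl restart locksmithd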
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:33.853887 containerd[1622]: time="2026-01-28T01:25:33.853296417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:25:33.853887 containerd[1622]: time="2026-01-28T01:25:33.853328732Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 01:25:33.853887 containerd[1622]: time="2026-01-28T01:25:33.853773287Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:25:33.853887 containerd[1622]: time="2026-01-28T01:25:33.853798211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.854461 containerd[1622]: time="2026-01-28T01:25:33.854369104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:33.854461 containerd[1622]: time="2026-01-28T01:25:33.854444365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.855086 containerd[1622]: time="2026-01-28T01:25:33.855000802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:33.855086 containerd[1622]: time="2026-01-28T01:25:33.855079849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.858026 containerd[1622]: time="2026-01-28T01:25:33.857253864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:33.858026 containerd[1622]: time="2026-01-28T01:25:33.857310218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.858026 containerd[1622]: time="2026-01-28T01:25:33.857441312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.858309 containerd[1622]: time="2026-01-28T01:25:33.858133383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:33.858472 containerd[1622]: time="2026-01-28T01:25:33.858424772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:33.858472 containerd[1622]: time="2026-01-28T01:25:33.858449807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:25:33.859088 containerd[1622]: time="2026-01-28T01:25:33.858935603Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 28 01:25:33.859357 containerd[1622]: time="2026-01-28T01:25:33.859311445Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:25:33.890830 containerd[1622]: time="2026-01-28T01:25:33.890525153Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:25:33.891812 containerd[1622]: time="2026-01-28T01:25:33.890989948Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:25:33.891812 containerd[1622]: time="2026-01-28T01:25:33.891064923Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 01:25:33.891812 containerd[1622]: time="2026-01-28T01:25:33.891097085Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:25:33.891812 containerd[1622]: time="2026-01-28T01:25:33.891121969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:25:33.891812 containerd[1622]: time="2026-01-28T01:25:33.891360893Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.892682350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893467244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893496291Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893517793Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893591567Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893765708Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893794786Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893820024Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893840712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893931296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893962176Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.893986447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.894026888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.894529 containerd[1622]: time="2026-01-28T01:25:33.894049327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894069232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894089807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894117103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894139798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894158735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894182101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894205324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894228589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894246365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894265964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894329943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894360965Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894442681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894467930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.897249 containerd[1622]: time="2026-01-28T01:25:33.894484393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.894680341Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.894941185Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.894970262Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.894991745Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.895009532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.895029070Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.895046440Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:25:33.899702 containerd[1622]: time="2026-01-28T01:25:33.895061590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 28 01:25:33.900004 containerd[1622]: time="2026-01-28T01:25:33.896388208Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:25:33.900004 containerd[1622]: time="2026-01-28T01:25:33.896473049Z" level=info msg="Connect containerd service" Jan 28 01:25:33.900004 containerd[1622]: time="2026-01-28T01:25:33.896556943Z" level=info msg="using legacy CRI server" Jan 28 01:25:33.900004 containerd[1622]: time="2026-01-28T01:25:33.896568784Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:25:33.900004 containerd[1622]: time="2026-01-28T01:25:33.897113053Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:25:33.902295 containerd[1622]: time="2026-01-28T01:25:33.900413290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:25:33.902295 containerd[1622]: time="2026-01-28T01:25:33.901177812Z" level=info msg="Start subscribing containerd event" Jan 28 01:25:33.902370 containerd[1622]: time="2026-01-28T01:25:33.902292895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:25:33.902404 containerd[1622]: time="2026-01-28T01:25:33.902370649Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:25:33.908137 containerd[1622]: time="2026-01-28T01:25:33.908097825Z" level=info msg="Start recovering state" Jan 28 01:25:33.908480 containerd[1622]: time="2026-01-28T01:25:33.908453110Z" level=info msg="Start event monitor" Jan 28 01:25:33.908757 containerd[1622]: time="2026-01-28T01:25:33.908730052Z" level=info msg="Start snapshots syncer" Jan 28 01:25:33.908847 containerd[1622]: time="2026-01-28T01:25:33.908826315Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:25:33.908946 containerd[1622]: time="2026-01-28T01:25:33.908924126Z" level=info msg="Start streaming server" Jan 28 01:25:33.909875 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:25:33.910728 containerd[1622]: time="2026-01-28T01:25:33.910702641Z" level=info msg="containerd successfully booted in 0.868324s" Jan 28 01:25:34.293170 systemd[1702]: Queued start job for default target default.target. Jan 28 01:25:34.294121 systemd[1702]: Created slice app.slice - User Application Slice. Jan 28 01:25:34.294154 systemd[1702]: Reached target paths.target - Paths. Jan 28 01:25:34.294177 systemd[1702]: Reached target timers.target - Timers. Jan 28 01:25:34.333273 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:25:34.388809 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:25:34.389487 systemd[1702]: Reached target sockets.target - Sockets. Jan 28 01:25:34.389511 systemd[1702]: Reached target basic.target - Basic System. Jan 28 01:25:34.389588 systemd[1702]: Reached target default.target - Main User Target. Jan 28 01:25:34.389750 systemd[1702]: Startup finished in 1.027s. Jan 28 01:25:34.390381 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:25:34.968217 systemd[1]: Started session-1.scope - Session 1 of User core. 
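containerd comes up above but logs "no network config found in /etc/cni/net.d". That error is expected on a node whose CNI add-on has not been installed yet; the cni conf syncer started in the same breath retries once a config appears. A hand-written bridge config for standalone testing (sketch only: a real cluster gets its conflist from the network add-on, and the subnet below is purely illustrative):

    mkdir -p /etc/cni/net.d
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "testnet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF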
Jan 28 01:25:35.372370 tar[1620]: linux-amd64/README.md Jan 28 01:25:35.421822 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:33996.service - OpenSSH per-connection server daemon (10.0.0.1:33996). Jan 28 01:25:35.690766 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:25:35.906526 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 33996 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:35.909084 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:36.321452 systemd-logind[1612]: New session 2 of user core. Jan 28 01:25:36.468061 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:25:36.993460 sshd[1721]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:37.130739 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:34016.service - OpenSSH per-connection server daemon (10.0.0.1:34016). Jan 28 01:25:37.145710 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:33996.service: Deactivated successfully. Jan 28 01:25:37.231518 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 01:25:37.308879 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit. Jan 28 01:25:37.344198 systemd-logind[1612]: Removed session 2. Jan 28 01:25:37.444496 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 34016 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:37.462565 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:37.597478 systemd-logind[1612]: New session 3 of user core. Jan 28 01:25:37.618358 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:25:37.786132 sshd[1735]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:37.808805 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:34016.service: Deactivated successfully. Jan 28 01:25:37.827444 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:25:37.827718 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:25:37.832201 systemd-logind[1612]: Removed session 3. Jan 28 01:25:38.659038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:38.661097 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:25:38.662104 systemd[1]: Startup finished in 36.679s (kernel) + 38.948s (userspace) = 1min 15.628s. Jan 28 01:25:38.663485 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:25:48.261291 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:57758.service - OpenSSH per-connection server daemon (10.0.0.1:57758). Jan 28 01:25:48.607884 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 57758 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:48.622876 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:48.671677 systemd-logind[1612]: New session 4 of user core. Jan 28 01:25:48.699989 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:25:49.063036 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:49.107920 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:57762.service - OpenSSH per-connection server daemon (10.0.0.1:57762). Jan 28 01:25:49.122848 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:57758.service: Deactivated successfully. 
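systemd reports the boot above as 36.679s of kernel time plus 38.948s of userspace. Breaking that total down uses the standard analyzer, nothing beyond what ships with systemd:

    systemd-analyze                                   # totals, as logged above
    systemd-analyze blame                             # slowest units first
    systemd-analyze critical-chain multi-user.target  # what gated the target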
Jan 28 01:25:49.168740 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:25:49.190216 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:25:49.197309 systemd-logind[1612]: Removed session 4. Jan 28 01:25:49.591278 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 57762 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:49.632866 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:49.757904 systemd-logind[1612]: New session 5 of user core. Jan 28 01:25:49.819008 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:25:49.967413 sshd[1766]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:49.991293 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:57762.service: Deactivated successfully. Jan 28 01:25:50.064110 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:25:50.097130 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:25:50.294247 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:57776.service - OpenSSH per-connection server daemon (10.0.0.1:57776). Jan 28 01:25:50.296425 systemd-logind[1612]: Removed session 5. Jan 28 01:25:50.912690 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 57776 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:50.952541 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:50.979169 systemd-logind[1612]: New session 6 of user core. Jan 28 01:25:50.991229 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:25:51.179914 sshd[1777]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:51.185388 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:57784.service - OpenSSH per-connection server daemon (10.0.0.1:57784). Jan 28 01:25:51.196204 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:57776.service: Deactivated successfully. Jan 28 01:25:51.379978 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:25:51.463530 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:25:51.527084 systemd-logind[1612]: Removed session 6. Jan 28 01:25:51.656845 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:51.670220 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:51.683743 kubelet[1753]: E0128 01:25:51.681671 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:25:51.708878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:25:51.709285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:25:51.709840 systemd-logind[1612]: New session 7 of user core. Jan 28 01:25:51.744467 systemd[1]: Started session-7.scope - Session 7 of User core. 
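The kubelet exits above because /var/lib/kubelet/config.yaml does not exist. That file is normally written by kubeadm init or kubeadm join, so the crash loop is expected on a node that has not yet joined a cluster. Either join the node or stop the unit; below is a minimal stand-in config for experiments only (kubeadm generates a far fuller file, and cgroupDriver must match the runtime, which the containerd configuration dump above runs with SystemdCgroup:false, i.e. cgroupfs):

    systemctl stop kubelet            # quiet the restart loop until joined

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    EOF
    systemctl restart kubelet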
Jan 28 01:25:52.058003 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:25:52.062278 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:52.192430 sudo[1791]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:52.232908 sshd[1782]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:52.282751 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786). Jan 28 01:25:52.289675 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:57784.service: Deactivated successfully. Jan 28 01:25:52.317358 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:25:52.333701 systemd-logind[1612]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:25:52.356666 systemd-logind[1612]: Removed session 7. Jan 28 01:25:52.477934 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:52.482323 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:52.518576 systemd-logind[1612]: New session 8 of user core. Jan 28 01:25:52.537837 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:25:52.709924 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:25:52.710515 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:52.772370 sudo[1801]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:52.817457 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 01:25:52.822256 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:52.945849 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 01:25:52.973441 auditctl[1804]: No rules Jan 28 01:25:52.979194 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:25:52.983009 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 01:25:53.041321 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:25:53.283266 augenrules[1823]: No rules Jan 28 01:25:53.286072 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:25:53.293773 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:53.302341 sshd[1793]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:53.349478 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:59010.service - OpenSSH per-connection server daemon (10.0.0.1:59010). Jan 28 01:25:53.351296 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:57786.service: Deactivated successfully. Jan 28 01:25:53.365124 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:25:53.370830 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:25:53.387909 systemd-logind[1612]: Removed session 8. Jan 28 01:25:53.473520 sshd[1829]: Accepted publickey for core from 10.0.0.1 port 59010 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:25:53.477437 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:53.504151 systemd-logind[1612]: New session 9 of user core. Jan 28 01:25:53.523449 systemd[1]: Started session-9.scope - Session 9 of User core. 
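Both auditctl and augenrules report "No rules" above: /etc/audit/rules.d/ is empty after the two default rule files were removed. Loading a rule the same way the audit-rules unit does is one file plus a restart; the watch below is a common example, not anything this node requires:

    cat <<'EOF' >/etc/audit/rules.d/10-identity.rules
    -w /etc/passwd -p wa -k identity
    EOF
    systemctl restart audit-rules   # the unit runs augenrules over rules.d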
Jan 28 01:25:53.623400 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:25:53.623974 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:26:00.741266 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:26:01.012471 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:26:01.779736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:26:01.869154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:05.324559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:05.382834 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:26:07.041242 kubelet[1871]: E0128 01:26:07.040573 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:26:07.148894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:26:07.162736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:26:11.907116 dockerd[1854]: time="2026-01-28T01:26:11.906535512Z" level=info msg="Starting up" Jan 28 01:26:14.183214 update_engine[1613]: I20260128 01:26:14.106549 1613 update_attempter.cc:509] Updating boot flags... Jan 28 01:26:14.719935 systemd[1]: var-lib-docker-metacopy\x2dcheck3695668107-merged.mount: Deactivated successfully. Jan 28 01:26:14.769860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1907) Jan 28 01:26:14.890796 dockerd[1854]: time="2026-01-28T01:26:14.890521415Z" level=info msg="Loading containers: start." Jan 28 01:26:15.350281 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1906) Jan 28 01:26:15.536198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1906) Jan 28 01:26:16.453018 kernel: Initializing XFRM netlink socket Jan 28 01:26:17.574175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:26:17.612134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:19.618965 systemd-networkd[1276]: docker0: Link UP Jan 28 01:26:19.958733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:19.994335 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:26:19.998747 dockerd[1854]: time="2026-01-28T01:26:19.998163168Z" level=info msg="Loading containers: done." Jan 28 01:26:20.329729 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck145811707-merged.mount: Deactivated successfully. 
Jan 28 01:26:20.365378 dockerd[1854]: time="2026-01-28T01:26:20.363274444Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:26:20.365378 dockerd[1854]: time="2026-01-28T01:26:20.363682732Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:26:20.365378 dockerd[1854]: time="2026-01-28T01:26:20.364084745Z" level=info msg="Daemon has completed initialization" Jan 28 01:26:20.492966 kubelet[2002]: E0128 01:26:20.492372 2002 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:26:20.508128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:26:20.508541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:26:20.861927 dockerd[1854]: time="2026-01-28T01:26:20.860998903Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:26:20.885011 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:26:30.797569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:26:30.853172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:33.027695 containerd[1622]: time="2026-01-28T01:26:33.026899950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:26:36.435350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:36.511266 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:26:37.656206 kubelet[2072]: E0128 01:26:37.655134 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:26:37.667738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:26:37.720167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:26:38.760345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342745334.mount: Deactivated successfully. Jan 28 01:26:48.891315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:26:48.934043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:52.398377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
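dockerd finishes initialization above on overlay2 but warns it is not using native diff because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; the warning affects image-build diffing speed, not running containers. Confirming the driver from a shell (standard docker CLI; /proc/config.gz exists only if the kernel exposes its build config):

    docker info --format '{{.Driver}}'              # expect: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR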
Jan 28 01:26:52.409393 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:26:56.772983 kubelet[2151]: E0128 01:26:56.769354 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:26:56.790479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:26:56.791505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:27:03.787863 containerd[1622]: time="2026-01-28T01:27:03.786335459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:03.800806 containerd[1622]: time="2026-01-28T01:27:03.796100969Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 01:27:03.809705 containerd[1622]: time="2026-01-28T01:27:03.807243518Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:03.835389 containerd[1622]: time="2026-01-28T01:27:03.835039702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:03.856897 containerd[1622]: time="2026-01-28T01:27:03.854038393Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 30.826614494s" Jan 28 01:27:03.856897 containerd[1622]: time="2026-01-28T01:27:03.854426978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 01:27:04.289858 containerd[1622]: time="2026-01-28T01:27:04.283025060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:27:07.060369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:27:07.155191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:27:10.681937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
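The kube-apiserver pull above takes roughly 30s on this link. With kubeadm on the node (an assumption; only the kubelet unit is visible in this log), the control-plane images can be pre-pulled in one shot, or individually through the CRI:

    kubeadm config images pull --kubernetes-version v1.32.11
    crictl pull registry.k8s.io/kube-apiserver:v1.32.11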
Jan 28 01:27:11.461173 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:27:15.111829 kubelet[2176]: E0128 01:27:15.109952 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:27:15.160109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:27:15.161092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:27:24.546774 containerd[1622]: time="2026-01-28T01:27:24.534044419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:24.553881 containerd[1622]: time="2026-01-28T01:27:24.551579368Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 01:27:24.559033 containerd[1622]: time="2026-01-28T01:27:24.558851711Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:24.576374 containerd[1622]: time="2026-01-28T01:27:24.574827643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:24.584060 containerd[1622]: time="2026-01-28T01:27:24.580567989Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 20.296962904s" Jan 28 01:27:24.584060 containerd[1622]: time="2026-01-28T01:27:24.580723872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 01:27:24.593724 containerd[1622]: time="2026-01-28T01:27:24.593145224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:27:25.425209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 01:27:25.454974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:27:27.943957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:27:27.989453 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:27:30.968937 kubelet[2199]: E0128 01:27:30.967175 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:27:30.975060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:27:30.975566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:27:36.425518 containerd[1622]: time="2026-01-28T01:27:36.425172349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:36.448548 containerd[1622]: time="2026-01-28T01:27:36.448290864Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 01:27:36.454359 containerd[1622]: time="2026-01-28T01:27:36.454206936Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:36.472036 containerd[1622]: time="2026-01-28T01:27:36.469015569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:36.472720 containerd[1622]: time="2026-01-28T01:27:36.472545244Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 11.879238135s" Jan 28 01:27:36.472952 containerd[1622]: time="2026-01-28T01:27:36.472773054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 01:27:36.476002 containerd[1622]: time="2026-01-28T01:27:36.475445868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:27:41.125895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:27:41.172294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:27:42.813838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:27:42.824214 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:27:43.905117 kubelet[2228]: E0128 01:27:43.904192 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:27:43.916533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:27:43.917818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:27:45.084236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030085366.mount: Deactivated successfully. Jan 28 01:27:52.876965 containerd[1622]: time="2026-01-28T01:27:52.875335247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:52.884278 containerd[1622]: time="2026-01-28T01:27:52.883892100Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 01:27:52.906686 containerd[1622]: time="2026-01-28T01:27:52.903558776Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:52.929472 containerd[1622]: time="2026-01-28T01:27:52.925696886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:52.929472 containerd[1622]: time="2026-01-28T01:27:52.927044280Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 16.451513371s" Jan 28 01:27:52.929472 containerd[1622]: time="2026-01-28T01:27:52.927086710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 01:27:52.960738 containerd[1622]: time="2026-01-28T01:27:52.960050405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:27:54.152330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 28 01:27:54.221097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:27:55.966511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805404095.mount: Deactivated successfully. Jan 28 01:27:58.704367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:27:58.745130 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:28:00.361736 kubelet[2268]: E0128 01:28:00.361252 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:28:00.373126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:28:00.374432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:28:08.490232 containerd[1622]: time="2026-01-28T01:28:08.485718445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:28:08.495202 containerd[1622]: time="2026-01-28T01:28:08.493371884Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 01:28:08.499493 containerd[1622]: time="2026-01-28T01:28:08.497272880Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:28:08.503315 containerd[1622]: time="2026-01-28T01:28:08.503256234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:28:08.513077 containerd[1622]: time="2026-01-28T01:28:08.506455986Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 15.546269593s" Jan 28 01:28:08.513077 containerd[1622]: time="2026-01-28T01:28:08.511530730Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 01:28:08.521412 containerd[1622]: time="2026-01-28T01:28:08.519471919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:28:09.328280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987768364.mount: Deactivated successfully. 
Jan 28 01:28:09.375978 containerd[1622]: time="2026-01-28T01:28:09.373005679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:28:09.387190 containerd[1622]: time="2026-01-28T01:28:09.386918009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 01:28:09.404891 containerd[1622]: time="2026-01-28T01:28:09.402690866Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:28:09.425022 containerd[1622]: time="2026-01-28T01:28:09.422305118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:28:09.425022 containerd[1622]: time="2026-01-28T01:28:09.423771226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 902.778281ms" Jan 28 01:28:09.425022 containerd[1622]: time="2026-01-28T01:28:09.423870895Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:28:09.427517 containerd[1622]: time="2026-01-28T01:28:09.427130121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:28:10.510429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 28 01:28:10.545571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:28:10.576576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807997785.mount: Deactivated successfully. Jan 28 01:28:11.405808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:28:11.441126 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:28:11.729804 kubelet[2345]: E0128 01:28:11.729253 2345 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:28:11.740345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:28:11.764351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:28:21.807418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 28 01:28:21.941197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:28:26.181747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
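The pause:3.10 image pulled above does not match the SandboxImage registry.k8s.io/pause:3.8 shown in the containerd configuration dump earlier; aligning them avoids carrying two pause images and the corresponding kubeadm warning. A sketch against containerd 1.7's config schema, assuming no conflicting cri section already exists in the file:

    mkdir -p /etc/containerd
    cat <<'EOF' >>/etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
    EOF
    systemctl restart containerd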
Jan 28 01:28:26.208110 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:28:27.553296 kubelet[2407]: E0128 01:28:27.543878 2407 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:28:27.564004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:28:27.564430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:28:38.301349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 28 01:28:38.469312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:28:43.194376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:28:43.255543 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:28:43.740179 kubelet[2433]: E0128 01:28:43.739975 2433 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:28:43.753931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:28:43.758989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:28:44.193965 containerd[1622]: time="2026-01-28T01:28:44.192840485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:28:44.202550 containerd[1622]: time="2026-01-28T01:28:44.202459588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Jan 28 01:28:44.209736 containerd[1622]: time="2026-01-28T01:28:44.208848082Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:28:44.245479 containerd[1622]: time="2026-01-28T01:28:44.243523664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:28:44.246931 containerd[1622]: time="2026-01-28T01:28:44.246794167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 34.819578905s"
Jan 28 01:28:44.246931 containerd[1622]: time="2026-01-28T01:28:44.246885562Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 28 01:28:53.786407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 28 01:28:53.870516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:28:55.349985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:28:55.354310 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 01:28:56.236904 kubelet[2478]: E0128 01:28:56.233229 2478 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 01:28:56.261357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 01:28:56.262181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 01:29:00.449409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:29:00.524733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:29:00.860045 systemd[1]: Reloading requested from client PID 2496 ('systemctl') (unit session-9.scope)...
Jan 28 01:29:00.861726 systemd[1]: Reloading...
Jan 28 01:29:01.685515 zram_generator::config[2541]: No configuration found.
Jan 28 01:29:02.307490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 28 01:29:02.534233 systemd[1]: Reloading finished in 1665 ms.
Jan 28 01:29:03.074873 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 28 01:29:03.076896 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 28 01:29:03.081906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:29:03.125032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 01:29:04.199508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 01:29:04.274045 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 28 01:29:04.798375 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 01:29:04.798375 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 28 01:29:04.798375 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 01:29:04.798375 kubelet[2593]: I0128 01:29:04.792220 2593 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 01:29:05.823290 kubelet[2593]: I0128 01:29:05.820388 2593 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 28 01:29:05.823290 kubelet[2593]: I0128 01:29:05.821332 2593 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 01:29:05.839923 kubelet[2593]: I0128 01:29:05.837917 2593 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 28 01:29:06.115342 kubelet[2593]: E0128 01:29:06.113837 2593 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:06.124737 kubelet[2593]: I0128 01:29:06.124695 2593 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 28 01:29:06.326775 kubelet[2593]: E0128 01:29:06.323500 2593 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 28 01:29:06.326775 kubelet[2593]: I0128 01:29:06.323542 2593 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 28 01:29:06.387729 kubelet[2593]: I0128 01:29:06.385482 2593 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 28 01:29:06.387729 kubelet[2593]: I0128 01:29:06.386834 2593 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 01:29:06.387729 kubelet[2593]: I0128 01:29:06.386875 2593 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 28 01:29:06.387729 kubelet[2593]: I0128 01:29:06.387159 2593 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 01:29:06.388816 kubelet[2593]: I0128 01:29:06.387181 2593 container_manager_linux.go:304] "Creating device plugin manager"
Jan 28 01:29:06.412925 kubelet[2593]: I0128 01:29:06.409398 2593 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 01:29:06.925177 kubelet[2593]: I0128 01:29:06.919864 2593 kubelet.go:446] "Attempting to sync node with API server"
Jan 28 01:29:06.925177 kubelet[2593]: I0128 01:29:06.920763 2593 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 01:29:06.925177 kubelet[2593]: I0128 01:29:06.921137 2593 kubelet.go:352] "Adding apiserver pod source"
Jan 28 01:29:06.925177 kubelet[2593]: I0128 01:29:06.921668 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 01:29:06.976546 kubelet[2593]: W0128 01:29:06.976035 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:06.976546 kubelet[2593]: E0128 01:29:06.976274 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:06.982393 kubelet[2593]: W0128 01:29:06.978745 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:06.982393 kubelet[2593]: E0128 01:29:06.978789 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:06.994697 kubelet[2593]: I0128 01:29:06.985410 2593 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 28 01:29:06.994697 kubelet[2593]: I0128 01:29:06.987879 2593 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 01:29:06.994697 kubelet[2593]: W0128 01:29:06.988045 2593 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 28 01:29:07.007360 kubelet[2593]: I0128 01:29:07.005014 2593 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 28 01:29:07.007360 kubelet[2593]: I0128 01:29:07.006752 2593 server.go:1287] "Started kubelet"
Jan 28 01:29:07.009318 kubelet[2593]: I0128 01:29:07.007834 2593 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 01:29:07.019406 kubelet[2593]: I0128 01:29:07.016737 2593 server.go:479] "Adding debug handlers to kubelet server"
Jan 28 01:29:07.031270 kubelet[2593]: I0128 01:29:07.031191 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 01:29:07.035366 kubelet[2593]: I0128 01:29:07.032865 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 01:29:07.036589 kubelet[2593]: I0128 01:29:07.036437 2593 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 28 01:29:07.039393 kubelet[2593]: I0128 01:29:07.037386 2593 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 01:29:07.079155 kubelet[2593]: I0128 01:29:07.077426 2593 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 28 01:29:07.079155 kubelet[2593]: E0128 01:29:07.078367 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:07.136922 kubelet[2593]: I0128 01:29:07.100910 2593 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 28 01:29:07.136922 kubelet[2593]: I0128 01:29:07.101175 2593 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 01:29:07.136922 kubelet[2593]: W0128 01:29:07.112702 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:07.136922 kubelet[2593]: E0128 01:29:07.113086 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:07.136922 kubelet[2593]: E0128 01:29:07.113923 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms"
Jan 28 01:29:08.197975 kubelet[2593]: I0128 01:29:08.180909 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 28 01:29:08.206861 kubelet[2593]: E0128 01:29:08.180908 2593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec0da33be09f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:29:07.006679542 +0000 UTC m=+2.695820565,LastTimestamp:2026-01-28 01:29:07.006679542 +0000 UTC m=+2.695820565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 28 01:29:08.206861 kubelet[2593]: E0128 01:29:08.202237 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:08.206861 kubelet[2593]: W0128 01:29:08.196170 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:08.224557 kubelet[2593]: E0128 01:29:08.224333 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:08.224557 kubelet[2593]: W0128 01:29:08.224095 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:08.224557 kubelet[2593]: E0128 01:29:08.224476 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:08.227006 kubelet[2593]: E0128 01:29:08.212438 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms"
Jan 28 01:29:08.227006 kubelet[2593]: E0128 01:29:08.225702 2593 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 28 01:29:08.309533 kubelet[2593]: I0128 01:29:08.308099 2593 factory.go:221] Registration of the containerd container factory successfully
Jan 28 01:29:08.309533 kubelet[2593]: I0128 01:29:08.308129 2593 factory.go:221] Registration of the systemd container factory successfully
Jan 28 01:29:08.318683 kubelet[2593]: E0128 01:29:08.318371 2593 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:08.372437 kubelet[2593]: E0128 01:29:08.369406 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:08.514697 kubelet[2593]: E0128 01:29:08.497193 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:08.514697 kubelet[2593]: W0128 01:29:08.511024 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:08.514697 kubelet[2593]: E0128 01:29:08.511245 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:08.606388 kubelet[2593]: E0128 01:29:08.600154 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:08.618353 kubelet[2593]: I0128 01:29:08.617230 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 28 01:29:08.654991 kubelet[2593]: I0128 01:29:08.652997 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 28 01:29:08.654991 kubelet[2593]: I0128 01:29:08.653964 2593 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 28 01:29:08.658445 kubelet[2593]: E0128 01:29:08.658132 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms"
Jan 28 01:29:08.658774 kubelet[2593]: I0128 01:29:08.658695 2593 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 28 01:29:08.658774 kubelet[2593]: I0128 01:29:08.658763 2593 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 28 01:29:08.660432 kubelet[2593]: E0128 01:29:08.659139 2593 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 28 01:29:08.662941 kubelet[2593]: W0128 01:29:08.662910 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:08.664105 kubelet[2593]: E0128 01:29:08.663490 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:08.701553 kubelet[2593]: E0128 01:29:08.701275 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:08.736201 kubelet[2593]: I0128 01:29:08.733538 2593 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 28 01:29:08.736201 kubelet[2593]: I0128 01:29:08.733569 2593 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 28 01:29:08.736201 kubelet[2593]: I0128 01:29:08.733704 2593 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 01:29:08.755858 kubelet[2593]: I0128 01:29:08.754907 2593 policy_none.go:49] "None policy: Start"
Jan 28 01:29:08.755858 kubelet[2593]: I0128 01:29:08.755080 2593 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 28 01:29:08.755858 kubelet[2593]: I0128 01:29:08.755155 2593 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 01:29:08.761705 kubelet[2593]: E0128 01:29:08.761420 2593 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 28 01:29:08.802741 kubelet[2593]: I0128 01:29:08.800895 2593 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 28 01:29:08.802741 kubelet[2593]: I0128 01:29:08.801259 2593 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 28 01:29:08.802741 kubelet[2593]: I0128 01:29:08.801359 2593 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 01:29:08.807973 kubelet[2593]: E0128 01:29:08.803227 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:08.827395 kubelet[2593]: E0128 01:29:08.827353 2593 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 28 01:29:08.828066 kubelet[2593]: E0128 01:29:08.828034 2593 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 28 01:29:08.886358 kubelet[2593]: I0128 01:29:08.885912 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 01:29:09.064850 kubelet[2593]: I0128 01:29:09.043485 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c780a10a7bb0a222ca216ed0da21ec61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c780a10a7bb0a222ca216ed0da21ec61\") " pod="kube-system/kube-apiserver-localhost"
Jan 28 01:29:09.064850 kubelet[2593]: I0128 01:29:09.054558 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c780a10a7bb0a222ca216ed0da21ec61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c780a10a7bb0a222ca216ed0da21ec61\") " pod="kube-system/kube-apiserver-localhost"
Jan 28 01:29:09.064850 kubelet[2593]: I0128 01:29:09.054693 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c780a10a7bb0a222ca216ed0da21ec61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c780a10a7bb0a222ca216ed0da21ec61\") " pod="kube-system/kube-apiserver-localhost"
Jan 28 01:29:09.079661 kubelet[2593]: I0128 01:29:09.076018 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:09.079661 kubelet[2593]: E0128 01:29:09.079096 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jan 28 01:29:09.113425 kubelet[2593]: E0128 01:29:09.112703 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:09.132179 kubelet[2593]: E0128 01:29:09.130513 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:09.139984 kubelet[2593]: E0128 01:29:09.138907 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:09.161504 kubelet[2593]: I0128 01:29:09.157886 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:29:09.161504 kubelet[2593]: I0128 01:29:09.157936 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:29:09.161504 kubelet[2593]: I0128 01:29:09.158041 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 28 01:29:09.161504 kubelet[2593]: I0128 01:29:09.158110 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:29:09.161504 kubelet[2593]: I0128 01:29:09.158148 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:29:09.162013 kubelet[2593]: I0128 01:29:09.158180 2593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 28 01:29:09.208811 kubelet[2593]: E0128 01:29:09.207171 2593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec0da33be09f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:29:07.006679542 +0000 UTC m=+2.695820565,LastTimestamp:2026-01-28 01:29:07.006679542 +0000 UTC m=+2.695820565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 28 01:29:09.286493 kubelet[2593]: I0128 01:29:09.284475 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:09.286493 kubelet[2593]: E0128 01:29:09.285040 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jan 28 01:29:09.422252 kubelet[2593]: E0128 01:29:09.420491 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:09.440437 containerd[1622]: time="2026-01-28T01:29:09.433409603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c780a10a7bb0a222ca216ed0da21ec61,Namespace:kube-system,Attempt:0,}"
Jan 28 01:29:09.444861 kubelet[2593]: E0128 01:29:09.436805 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:09.444861 kubelet[2593]: E0128 01:29:09.439472 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:09.447249 containerd[1622]: time="2026-01-28T01:29:09.446057298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}"
Jan 28 01:29:09.447249 containerd[1622]: time="2026-01-28T01:29:09.446782992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}"
Jan 28 01:29:09.460766 kubelet[2593]: E0128 01:29:09.460707 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s"
Jan 28 01:29:09.704204 kubelet[2593]: I0128 01:29:09.703509 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:09.710329 kubelet[2593]: E0128 01:29:09.710086 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jan 28 01:29:09.710329 kubelet[2593]: W0128 01:29:09.710203 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:09.710329 kubelet[2593]: E0128 01:29:09.710275 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:10.519472 kubelet[2593]: I0128 01:29:10.518953 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:10.519472 kubelet[2593]: E0128 01:29:10.519366 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jan 28 01:29:10.561516 kubelet[2593]: W0128 01:29:10.559915 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:10.561516 kubelet[2593]: E0128 01:29:10.560037 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:10.576753 kubelet[2593]: W0128 01:29:10.576276 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:10.576753 kubelet[2593]: E0128 01:29:10.576986 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:10.745933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635859540.mount: Deactivated successfully.
Jan 28 01:29:10.795585 containerd[1622]: time="2026-01-28T01:29:10.794077842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:29:10.818106 containerd[1622]: time="2026-01-28T01:29:10.814898481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 28 01:29:10.825118 containerd[1622]: time="2026-01-28T01:29:10.824456364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:29:10.844542 containerd[1622]: time="2026-01-28T01:29:10.837130329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 28 01:29:10.844542 containerd[1622]: time="2026-01-28T01:29:10.841442959Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:29:10.857986 containerd[1622]: time="2026-01-28T01:29:10.856468973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 28 01:29:10.862198 containerd[1622]: time="2026-01-28T01:29:10.857836893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:29:10.883898 containerd[1622]: time="2026-01-28T01:29:10.883800982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 01:29:10.898968 containerd[1622]: time="2026-01-28T01:29:10.894330257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.456784044s"
Jan 28 01:29:10.901477 containerd[1622]: time="2026-01-28T01:29:10.901161472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.454929421s"
Jan 28 01:29:10.913534 containerd[1622]: time="2026-01-28T01:29:10.911041427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.464193612s"
Jan 28 01:29:10.962731 kubelet[2593]: W0128 01:29:10.962499 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:10.962731 kubelet[2593]: E0128 01:29:10.962690 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:11.079386 kubelet[2593]: E0128 01:29:11.067266 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="3.2s"
Jan 28 01:29:11.731235 containerd[1622]: time="2026-01-28T01:29:11.723884204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:29:11.731235 containerd[1622]: time="2026-01-28T01:29:11.724304470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:29:11.739219 containerd[1622]: time="2026-01-28T01:29:11.738754269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:29:11.756169 containerd[1622]: time="2026-01-28T01:29:11.739122346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:29:11.766740 containerd[1622]: time="2026-01-28T01:29:11.765868066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:29:11.767132 containerd[1622]: time="2026-01-28T01:29:11.766729048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:29:11.767362 containerd[1622]: time="2026-01-28T01:29:11.767079663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:29:11.778565 containerd[1622]: time="2026-01-28T01:29:11.777837359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:29:11.867893 containerd[1622]: time="2026-01-28T01:29:11.839272789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:29:11.867893 containerd[1622]: time="2026-01-28T01:29:11.853384894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:29:11.867893 containerd[1622]: time="2026-01-28T01:29:11.854212824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:29:11.867893 containerd[1622]: time="2026-01-28T01:29:11.856712172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:29:12.101176 systemd[1]: run-containerd-runc-k8s.io-ea0e44c4548dce7a27a9bbb4e429cfb932396e050c9f7181bb09ae4a04494639-runc.1B0yC2.mount: Deactivated successfully.
Jan 28 01:29:12.833312 kubelet[2593]: E0128 01:29:12.833038 2593 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:12.892068 kubelet[2593]: W0128 01:29:12.886251 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:12.929845 kubelet[2593]: E0128 01:29:12.908994 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:13.016176 kubelet[2593]: I0128 01:29:13.005311 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:13.329917 kubelet[2593]: E0128 01:29:13.322503 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jan 28 01:29:13.892288 containerd[1622]: time="2026-01-28T01:29:13.879246178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07aa3e424c7a1466817eda0cc49662afd40cae5a5c71e1a5760a2961f51bed1\""
Jan 28 01:29:13.932989 containerd[1622]: time="2026-01-28T01:29:13.923387639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c780a10a7bb0a222ca216ed0da21ec61,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0e44c4548dce7a27a9bbb4e429cfb932396e050c9f7181bb09ae4a04494639\""
Jan 28 01:29:14.031683 kubelet[2593]: E0128 01:29:14.023881 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:14.127707 kubelet[2593]: E0128 01:29:14.126152 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:14.134851 kubelet[2593]: E0128 01:29:14.130453 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:14.134918 containerd[1622]: time="2026-01-28T01:29:14.123110207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3ab0e05591d7a73e82b94c1eb3a6dfd33e5be7e4ec9325435b0abcb78f905a5\""
Jan 28 01:29:14.213468 containerd[1622]: time="2026-01-28T01:29:14.209044130Z" level=info msg="CreateContainer within sandbox \"e07aa3e424c7a1466817eda0cc49662afd40cae5a5c71e1a5760a2961f51bed1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 28 01:29:14.213468 containerd[1622]: time="2026-01-28T01:29:14.211725883Z" level=info msg="CreateContainer within sandbox \"ea0e44c4548dce7a27a9bbb4e429cfb932396e050c9f7181bb09ae4a04494639\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 28 01:29:14.213468 containerd[1622]: time="2026-01-28T01:29:14.212096052Z" level=info msg="CreateContainer within sandbox \"c3ab0e05591d7a73e82b94c1eb3a6dfd33e5be7e4ec9325435b0abcb78f905a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 28 01:29:14.310052 kubelet[2593]: E0128 01:29:14.305145 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="6.4s"
Jan 28 01:29:14.683155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450407552.mount: Deactivated successfully.
Jan 28 01:29:14.705840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238724046.mount: Deactivated successfully.
Jan 28 01:29:14.729102 containerd[1622]: time="2026-01-28T01:29:14.726780518Z" level=info msg="CreateContainer within sandbox \"e07aa3e424c7a1466817eda0cc49662afd40cae5a5c71e1a5760a2961f51bed1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106\""
Jan 28 01:29:14.744082 containerd[1622]: time="2026-01-28T01:29:14.740068136Z" level=info msg="CreateContainer within sandbox \"ea0e44c4548dce7a27a9bbb4e429cfb932396e050c9f7181bb09ae4a04494639\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"203b8f7d5f62aa5364877d62467800120fe82d73bd9bd11e821f4e5caa6e2cdb\""
Jan 28 01:29:14.752911 containerd[1622]: time="2026-01-28T01:29:14.752518221Z" level=info msg="StartContainer for \"1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106\""
Jan 28 01:29:14.779743 containerd[1622]: time="2026-01-28T01:29:14.777710728Z" level=info msg="StartContainer for \"203b8f7d5f62aa5364877d62467800120fe82d73bd9bd11e821f4e5caa6e2cdb\""
Jan 28 01:29:14.893239 containerd[1622]: time="2026-01-28T01:29:14.872567068Z" level=info msg="CreateContainer within sandbox \"c3ab0e05591d7a73e82b94c1eb3a6dfd33e5be7e4ec9325435b0abcb78f905a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3\""
Jan 28 01:29:15.104713 containerd[1622]: time="2026-01-28T01:29:15.103227899Z" level=info msg="StartContainer for \"b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3\""
Jan 28 01:29:15.282472 kubelet[2593]: W0128 01:29:15.280388 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:15.298586 kubelet[2593]: E0128 01:29:15.293000 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:16.054039 containerd[1622]: time="2026-01-28T01:29:16.052830925Z" level=info msg="StartContainer for \"203b8f7d5f62aa5364877d62467800120fe82d73bd9bd11e821f4e5caa6e2cdb\" returns successfully"
Jan 28 01:29:16.054039 containerd[1622]: time="2026-01-28T01:29:16.053131119Z" level=info msg="StartContainer for \"1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106\" returns successfully"
Jan 28 01:29:16.236574 containerd[1622]: time="2026-01-28T01:29:16.183134060Z" level=info msg="StartContainer for \"b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3\" returns successfully"
Jan 28 01:29:16.275348 kubelet[2593]: W0128 01:29:16.262482 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:16.275348 kubelet[2593]: E0128 01:29:16.275069 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:16.354561 kubelet[2593]: W0128 01:29:16.336941 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jan 28 01:29:16.356970 kubelet[2593]: E0128 01:29:16.356926 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jan 28 01:29:16.611584 kubelet[2593]: I0128 01:29:16.607961 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:16.616965 kubelet[2593]: E0128 01:29:16.616044 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jan 28 01:29:16.813756 kubelet[2593]: E0128 01:29:16.813486 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:16.834581 kubelet[2593]: E0128 01:29:16.819561 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:16.834581 kubelet[2593]: E0128 01:29:16.819007 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:16.834581 kubelet[2593]: E0128 01:29:16.833986 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:16.841515 kubelet[2593]: E0128 01:29:16.840867 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:16.841515 kubelet[2593]: E0128 01:29:16.841420 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:17.906450 kubelet[2593]: E0128 01:29:17.902766 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:17.919770 kubelet[2593]: E0128 01:29:17.908712 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:17.953724 kubelet[2593]: E0128 01:29:17.953180 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:17.953724 kubelet[2593]: E0128 01:29:17.953399 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:17.964960 kubelet[2593]: E0128 01:29:17.954709 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:17.966014 kubelet[2593]: E0128 01:29:17.965891 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:18.829881 kubelet[2593]: E0128 01:29:18.829535 2593 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 28 01:29:18.896267 kubelet[2593]: E0128 01:29:18.895321 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:18.904261 kubelet[2593]: E0128 01:29:18.902301 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:18.907188 kubelet[2593]: E0128 01:29:18.905209 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:18.907188 kubelet[2593]: E0128 01:29:18.905371 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:21.630843 kubelet[2593]: E0128 01:29:21.626582 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:21.660426 kubelet[2593]: E0128 01:29:21.638721 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:22.814273 kubelet[2593]: E0128 01:29:22.804855 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:22.814273 kubelet[2593]: E0128 01:29:22.813762 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:23.062752 kubelet[2593]: I0128 01:29:23.059795 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:24.860466 kubelet[2593]: E0128 01:29:24.860236 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:24.889515 kubelet[2593]: E0128 01:29:24.871020 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:28.072302 kubelet[2593]: W0128 01:29:28.067579 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 28 01:29:28.086885 kubelet[2593]: E0128 01:29:28.079280 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 01:29:28.837545 kubelet[2593]: E0128 01:29:28.836945 2593 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 28 01:29:29.464788 kubelet[2593]: E0128 01:29:29.458422 2593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188ec0da33be09f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:29:07.006679542 +0000 UTC m=+2.695820565,LastTimestamp:2026-01-28 01:29:07.006679542 +0000 UTC m=+2.695820565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 28 01:29:30.716089 kubelet[2593]: E0128 01:29:30.715817 2593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Jan 28 01:29:31.853983 kubelet[2593]: E0128 01:29:31.699168 2593 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 01:29:32.972351 kubelet[2593]: W0128 01:29:32.969156 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 28 01:29:33.000132 kubelet[2593]: W0128 01:29:32.981700 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 28 01:29:33.000132 kubelet[2593]: E0128 01:29:32.981793 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 01:29:33.000132 kubelet[2593]: E0128 01:29:32.969310 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 01:29:33.038448 kubelet[2593]: E0128 01:29:33.035724 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:33.042900 kubelet[2593]: E0128 01:29:33.038946 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:33.062154 kubelet[2593]: E0128 01:29:33.062095 2593 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Jan 28 01:29:33.281186 kubelet[2593]: W0128 01:29:33.273051 2593 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 28 01:29:33.281186 kubelet[2593]: E0128 01:29:33.276970 2593 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 01:29:37.783850 kubelet[2593]: E0128 01:29:37.770703 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 28 01:29:37.783850 kubelet[2593]: E0128 01:29:37.771062 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:29:38.106921 kubelet[2593]: E0128 01:29:38.076381 2593 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 28 01:29:38.851832 kubelet[2593]: E0128 01:29:38.848702 2593 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 28 01:29:39.157946 kubelet[2593]: E0128 01:29:39.156803 2593 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 28 01:29:40.129794 kubelet[2593]: I0128 01:29:40.120547 2593 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 28 01:29:40.236323 kubelet[2593]: I0128 01:29:40.235812 2593 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 28 01:29:40.236323 kubelet[2593]: E0128 01:29:40.236757 2593 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 28 01:29:40.503390 kubelet[2593]: E0128 01:29:40.496091 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:40.597035 kubelet[2593]: E0128 01:29:40.596978 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:40.702397 kubelet[2593]: E0128 01:29:40.702097 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:40.803423 kubelet[2593]: E0128 01:29:40.803357 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:40.907382 kubelet[2593]: E0128 01:29:40.904201 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.007695 kubelet[2593]: E0128 01:29:41.005430 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.116713 kubelet[2593]: E0128 01:29:41.106746 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.209463 kubelet[2593]: E0128 01:29:41.207382 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.321669 kubelet[2593]: E0128 01:29:41.311062 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.415742 kubelet[2593]: E0128 01:29:41.414749 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.514958 kubelet[2593]: E0128 01:29:41.514897 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.619076 kubelet[2593]: E0128 01:29:41.618442 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.746260 kubelet[2593]: E0128 01:29:41.737470 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.848378 kubelet[2593]: E0128 01:29:41.846772 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:41.959398 kubelet[2593]: E0128 01:29:41.958575 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:42.060562 kubelet[2593]: E0128 01:29:42.060286 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:42.166716 kubelet[2593]: E0128 01:29:42.165873 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 28 01:29:42.268890 kubelet[2593]: E0128 01:29:42.268313 2593 kubelet_node_status.go:466]
"Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.375066 kubelet[2593]: E0128 01:29:42.369711 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.472285 kubelet[2593]: E0128 01:29:42.471327 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.572561 kubelet[2593]: E0128 01:29:42.572493 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.674162 kubelet[2593]: E0128 01:29:42.673461 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.782327 kubelet[2593]: E0128 01:29:42.774791 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.878995 kubelet[2593]: E0128 01:29:42.875918 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:42.977911 kubelet[2593]: E0128 01:29:42.977514 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.079962 kubelet[2593]: E0128 01:29:43.079882 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.182587 kubelet[2593]: E0128 01:29:43.182491 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.288667 kubelet[2593]: E0128 01:29:43.285980 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.390564 kubelet[2593]: E0128 01:29:43.389773 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.494031 kubelet[2593]: E0128 01:29:43.493964 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.597843 kubelet[2593]: E0128 01:29:43.594868 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.696135 kubelet[2593]: E0128 01:29:43.695703 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.806459 kubelet[2593]: E0128 01:29:43.806029 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:43.917067 kubelet[2593]: E0128 01:29:43.916907 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.017500 kubelet[2593]: E0128 01:29:44.017447 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.124132 kubelet[2593]: E0128 01:29:44.123973 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.263671 kubelet[2593]: E0128 01:29:44.242527 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.343082 kubelet[2593]: E0128 01:29:44.343026 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 
01:29:44.461345 kubelet[2593]: E0128 01:29:44.459436 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.564418 kubelet[2593]: E0128 01:29:44.561436 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.670710 kubelet[2593]: E0128 01:29:44.670076 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.774174 kubelet[2593]: E0128 01:29:44.773462 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.880468 kubelet[2593]: E0128 01:29:44.878912 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:44.980246 kubelet[2593]: E0128 01:29:44.979759 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.081896 kubelet[2593]: E0128 01:29:45.080881 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.182690 kubelet[2593]: E0128 01:29:45.181192 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.284339 kubelet[2593]: E0128 01:29:45.283082 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.387269 kubelet[2593]: E0128 01:29:45.383444 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.487161 kubelet[2593]: E0128 01:29:45.484765 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.588587 kubelet[2593]: E0128 01:29:45.586551 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.687906 kubelet[2593]: E0128 01:29:45.686900 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.798206 kubelet[2593]: E0128 01:29:45.798069 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:45.899980 kubelet[2593]: E0128 01:29:45.899459 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.007228 kubelet[2593]: E0128 01:29:46.000745 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.115845 kubelet[2593]: E0128 01:29:46.114719 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.218403 kubelet[2593]: E0128 01:29:46.215890 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.319722 kubelet[2593]: E0128 01:29:46.319057 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.424936 kubelet[2593]: E0128 01:29:46.423343 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.531389 kubelet[2593]: E0128 01:29:46.525042 2593 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.631867 kubelet[2593]: E0128 01:29:46.630562 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.732821 kubelet[2593]: E0128 01:29:46.731966 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.848860 kubelet[2593]: E0128 01:29:46.848795 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:46.949031 kubelet[2593]: E0128 01:29:46.948963 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.051134 kubelet[2593]: E0128 01:29:47.050417 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.151792 kubelet[2593]: E0128 01:29:47.151720 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.255715 kubelet[2593]: E0128 01:29:47.253475 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.360351 kubelet[2593]: E0128 01:29:47.357448 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.467723 kubelet[2593]: E0128 01:29:47.462714 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.564219 kubelet[2593]: E0128 01:29:47.564162 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.666480 kubelet[2593]: E0128 01:29:47.666321 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.779152 kubelet[2593]: E0128 01:29:47.779044 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:47.880001 kubelet[2593]: E0128 01:29:47.879793 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.002825 kubelet[2593]: E0128 01:29:48.000307 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.110288 kubelet[2593]: E0128 01:29:48.109532 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.210366 kubelet[2593]: E0128 01:29:48.210325 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.312362 kubelet[2593]: E0128 01:29:48.312290 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.416142 kubelet[2593]: E0128 01:29:48.416004 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.522366 kubelet[2593]: E0128 01:29:48.521843 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.631912 kubelet[2593]: E0128 01:29:48.630971 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 
01:29:48.736656 kubelet[2593]: E0128 01:29:48.736384 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.837706 kubelet[2593]: E0128 01:29:48.836974 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:48.859811 kubelet[2593]: E0128 01:29:48.858818 2593 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:29:48.944332 kubelet[2593]: E0128 01:29:48.941113 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.057496 kubelet[2593]: E0128 01:29:49.055529 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.169723 kubelet[2593]: E0128 01:29:49.159340 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.259723 kubelet[2593]: E0128 01:29:49.259586 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.361553 kubelet[2593]: E0128 01:29:49.361457 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.474265 kubelet[2593]: E0128 01:29:49.471547 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.600128 kubelet[2593]: E0128 01:29:49.575721 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.683110 kubelet[2593]: E0128 01:29:49.680099 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.786004 kubelet[2593]: E0128 01:29:49.784520 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:49.903864 kubelet[2593]: E0128 01:29:49.892235 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.015321 kubelet[2593]: E0128 01:29:50.013563 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.114412 kubelet[2593]: E0128 01:29:50.114354 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.247700 kubelet[2593]: E0128 01:29:50.238550 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.344711 kubelet[2593]: E0128 01:29:50.344558 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.450521 kubelet[2593]: E0128 01:29:50.449978 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.590436 kubelet[2593]: E0128 01:29:50.558128 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.697712 kubelet[2593]: E0128 01:29:50.696365 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.810780 kubelet[2593]: E0128 01:29:50.801882 2593 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:50.810780 kubelet[2593]: E0128 01:29:50.827765 2593 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:29:51.118863 kubelet[2593]: E0128 01:29:51.118791 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.305391 kubelet[2593]: E0128 01:29:51.286839 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.401825 kubelet[2593]: E0128 01:29:51.388549 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.489454 kubelet[2593]: E0128 01:29:51.489268 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.600710 kubelet[2593]: E0128 01:29:51.599772 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.713805 kubelet[2593]: E0128 01:29:51.713555 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.932203 kubelet[2593]: E0128 01:29:51.919166 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:51.932203 kubelet[2593]: E0128 01:29:51.928493 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:29:51.932203 kubelet[2593]: E0128 01:29:51.935894 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:52.012921 systemd[1]: Reloading requested from client PID 2878 ('systemctl') (unit session-9.scope)... Jan 28 01:29:52.013590 systemd[1]: Reloading... 
Jan 28 01:29:52.037095 kubelet[2593]: E0128 01:29:52.036874 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.194066 kubelet[2593]: E0128 01:29:52.155359 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.293322 kubelet[2593]: E0128 01:29:52.291736 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.335258 kubelet[2593]: E0128 01:29:52.335220 2593 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:29:52.336953 kubelet[2593]: E0128 01:29:52.336809 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:52.392074 kubelet[2593]: E0128 01:29:52.391976 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.510081 kubelet[2593]: E0128 01:29:52.506389 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.609506 kubelet[2593]: E0128 01:29:52.608737 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.711951 kubelet[2593]: E0128 01:29:52.711706 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.814564 kubelet[2593]: E0128 01:29:52.813588 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:52.935120 kubelet[2593]: E0128 01:29:52.921505 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.039222 kubelet[2593]: E0128 01:29:53.038908 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.070534 zram_generator::config[2923]: No configuration found. 
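(zram-generator found no configuration during the reload, so it sets nothing up.) The recurring "Nameserver limits exceeded" warning is explained by the applied line it prints: the host's resolv.conf hands the kubelet more than three nameservers, and, like glibc with its MAXNS limit, the kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) when building pod DNS configs and warns about the rest. A self-contained sketch of that check (file path and wording are illustrative, not the kubelet's own code):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("nameserver limit exceeded: %d configured, applied line is: %s\n",
    			len(servers), strings.Join(servers[:maxNameservers], " "))
    	} else {
    		fmt.Printf("ok: %d nameserver(s)\n", len(servers))
    	}
    }

Trimming the host resolv.conf to three servers is what silences the warning; the omitted entries are simply never consulted by pods.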
Jan 28 01:29:53.150103 kubelet[2593]: E0128 01:29:53.140813 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.248766 kubelet[2593]: E0128 01:29:53.247194 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.356819 kubelet[2593]: E0128 01:29:53.355429 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.461258 kubelet[2593]: E0128 01:29:53.461209 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.572570 kubelet[2593]: E0128 01:29:53.570784 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.677411 kubelet[2593]: E0128 01:29:53.677320 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.888884 kubelet[2593]: E0128 01:29:53.793882 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:53.904754 kubelet[2593]: E0128 01:29:53.904557 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.009097 kubelet[2593]: E0128 01:29:54.005319 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.112263 kubelet[2593]: E0128 01:29:54.109750 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.215560 kubelet[2593]: E0128 01:29:54.214065 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.326842 kubelet[2593]: E0128 01:29:54.326785 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.431486 kubelet[2593]: E0128 01:29:54.427732 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.435135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
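The docker.socket note just above is benign: systemd already rewrote the legacy /var/run path to /run and only asks for the unit file to catch up. The earlier "net/http: TLS handshake timeout" failures (01:29:28 through 01:29:33) are produced client-side by Go's HTTP stack: http.Transport gives up on any connection whose TLS handshake exceeds its TLSHandshakeTimeout, 10 seconds on http.DefaultTransport, and client-go builds on the same transport machinery. That is what the kubelet's requests to https://10.0.0.77:6443 ran into while the apiserver was still coming up. A stdlib-only reproduction of the failure mode (the /healthz path and the InsecureSkipVerify shortcut are assumptions to keep the sketch self-contained; the kubelet verifies the apiserver against its CA bundle instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{
    			// http.DefaultTransport uses the same 10s handshake limit.
    			TLSHandshakeTimeout: 10 * time.Second,
    			// Illustrative shortcut only; do not skip verification in real tooling.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		Timeout: 15 * time.Second,
    	}
    	resp, err := client.Get("https://10.0.0.77:6443/healthz")
    	if err != nil {
    		fmt.Println("request failed:", err) // e.g. "net/http: TLS handshake timeout"
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver status:", resp.Status)
    }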
Jan 28 01:29:54.538740 kubelet[2593]: E0128 01:29:54.538443 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.642245 kubelet[2593]: E0128 01:29:54.641835 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.744574 kubelet[2593]: E0128 01:29:54.744215 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.849794 kubelet[2593]: E0128 01:29:54.848812 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:54.954532 kubelet[2593]: E0128 01:29:54.954247 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.057377 kubelet[2593]: E0128 01:29:55.057106 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.162589 kubelet[2593]: E0128 01:29:55.162378 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.195758 systemd[1]: Reloading finished in 3180 ms. Jan 28 01:29:55.264816 kubelet[2593]: E0128 01:29:55.264005 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.365679 kubelet[2593]: E0128 01:29:55.365439 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.484775 kubelet[2593]: E0128 01:29:55.476579 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.577210 kubelet[2593]: E0128 01:29:55.576912 2593 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:29:55.588689 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:29:55.679752 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:29:55.680425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:29:55.719148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:29:56.939183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:29:57.003370 (kubelet)[2972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:29:57.726336 kubelet[2972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:29:57.726336 kubelet[2972]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:29:57.726336 kubelet[2972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
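The replacement kubelet (PID 2972) now repeats the startup that instance 2593 went through, including node registration; the "node \"localhost\" not found" loop ends only once the Node object is readable from the apiserver. The same wait can be expressed from outside with client-go, as in the sketch below (the kubeconfig path, node name, timeout, and 2-second poll are assumptions for illustration; the kubelet itself uses its bootstrap credentials under /var/lib/kubelet rather than this flow):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location (kubeadm convention).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("node registered:", node.Name)
    			return
    		}
    		if !apierrors.IsNotFound(err) {
    			fmt.Println("transient error:", err) // e.g. TLS handshake timeout
    		}
    		if ctx.Err() != nil {
    			fmt.Println("gave up:", ctx.Err())
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }

Polling a Get keeps the sketch short; a production tool would open a watch or use an informer, which is the machinery behind the reflector errors earlier in this log.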
Jan 28 01:29:57.726336 kubelet[2972]: I0128 01:29:57.725259 2972 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:29:57.855490 kubelet[2972]: I0128 01:29:57.855434 2972 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:29:57.857350 kubelet[2972]: I0128 01:29:57.855968 2972 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:29:57.857350 kubelet[2972]: I0128 01:29:57.856306 2972 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:29:57.897140 kubelet[2972]: I0128 01:29:57.894691 2972 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:29:57.933382 kubelet[2972]: I0128 01:29:57.928036 2972 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:29:58.201483 kubelet[2972]: E0128 01:29:58.201418 2972 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:29:58.201483 kubelet[2972]: I0128 01:29:58.201484 2972 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:29:58.234696 kubelet[2972]: I0128 01:29:58.234450 2972 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 01:29:58.241004 kubelet[2972]: I0128 01:29:58.240540 2972 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:29:58.246852 kubelet[2972]: I0128 01:29:58.244383 2972 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:29:58.254182 kubelet[2972]: I0128 01:29:58.252281 2972 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 28 01:29:58.254182 kubelet[2972]: I0128 01:29:58.252323 2972 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:29:58.254182 kubelet[2972]: I0128 01:29:58.252410 2972 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:29:58.267095 kubelet[2972]: I0128 01:29:58.262891 2972 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:29:58.267095 kubelet[2972]: I0128 01:29:58.263007 2972 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:29:58.267095 kubelet[2972]: I0128 01:29:58.263036 2972 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:29:58.267095 kubelet[2972]: I0128 01:29:58.263050 2972 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:29:58.292458 kubelet[2972]: I0128 01:29:58.278383 2972 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:29:58.303464 kubelet[2972]: I0128 01:29:58.293371 2972 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:29:58.303464 kubelet[2972]: I0128 01:29:58.295697 2972 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:29:58.309799 kubelet[2972]: I0128 01:29:58.303583 2972 server.go:1287] "Started kubelet" Jan 28 01:29:58.326207 kubelet[2972]: I0128 01:29:58.324707 2972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:29:58.335328 kubelet[2972]: I0128 01:29:58.334703 2972 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:29:58.377764 kubelet[2972]: I0128 01:29:58.376759 2972 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:29:58.377764 kubelet[2972]: I0128 01:29:58.377001 2972 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:29:58.387793 kubelet[2972]: I0128 01:29:58.387005 2972 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:29:58.395368 kubelet[2972]: I0128 01:29:58.388294 2972 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:29:58.395368 kubelet[2972]: I0128 01:29:58.388724 2972 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:29:58.405529 kubelet[2972]: I0128 01:29:58.405160 2972 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:29:58.407287 kubelet[2972]: I0128 01:29:58.407228 2972 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:29:58.421402 kubelet[2972]: I0128 01:29:58.418161 2972 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:29:58.432687 kubelet[2972]: I0128 01:29:58.428114 2972 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:29:58.445163 kubelet[2972]: I0128 01:29:58.445133 2972 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:29:58.449554 kubelet[2972]: E0128 01:29:58.449484 2972 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:29:58.527781 kubelet[2972]: I0128 01:29:58.527687 2972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:29:58.556159 kubelet[2972]: I0128 01:29:58.553251 2972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:29:58.556159 kubelet[2972]: I0128 01:29:58.553294 2972 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:29:58.556159 kubelet[2972]: I0128 01:29:58.553323 2972 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:29:58.556159 kubelet[2972]: I0128 01:29:58.553335 2972 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:29:58.556159 kubelet[2972]: E0128 01:29:58.553407 2972 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:29:58.653776 kubelet[2972]: E0128 01:29:58.653734 2972 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:29:58.854880 kubelet[2972]: E0128 01:29:58.854401 2972 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.915843 2972 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.915863 2972 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.915893 2972 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.916286 2972 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.916306 2972 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.916334 2972 policy_none.go:49] "None policy: Start" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.916347 2972 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.916364 2972 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:29:58.919200 kubelet[2972]: I0128 01:29:58.916514 2972 state_mem.go:75] "Updated machine memory state" Jan 28 01:29:58.926537 kubelet[2972]: I0128 01:29:58.926273 2972 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:29:58.926782 kubelet[2972]: I0128 01:29:58.926749 2972 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:29:58.926830 kubelet[2972]: I0128 01:29:58.926772 2972 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:29:58.935210 kubelet[2972]: I0128 01:29:58.932778 2972 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:29:58.941040 kubelet[2972]: E0128 01:29:58.938732 2972 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:29:58.954264 kubelet[2972]: I0128 01:29:58.954228 2972 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:29:58.957806 containerd[1622]: time="2026-01-28T01:29:58.955545415Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:29:58.959011 kubelet[2972]: I0128 01:29:58.955955 2972 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:29:59.187273 kubelet[2972]: I0128 01:29:59.179885 2972 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:29:59.258291 kubelet[2972]: I0128 01:29:59.257743 2972 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:29:59.260730 kubelet[2972]: I0128 01:29:59.260474 2972 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:29:59.265588 kubelet[2972]: I0128 01:29:59.261737 2972 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:29:59.267583 kubelet[2972]: I0128 01:29:59.267373 2972 apiserver.go:52] "Watching apiserver" Jan 28 01:29:59.281860 kubelet[2972]: I0128 01:29:59.268278 2972 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:29:59.281860 kubelet[2972]: I0128 01:29:59.268371 2972 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:29:59.318217 kubelet[2972]: I0128 01:29:59.316949 2972 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:29:59.338669 kubelet[2972]: I0128 01:29:59.336498 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l774h\" (UniqueName: \"kubernetes.io/projected/a3eaa1b9-f820-4985-a022-81509f53fef1-kube-api-access-l774h\") pod \"kube-proxy-nnb7t\" (UID: \"a3eaa1b9-f820-4985-a022-81509f53fef1\") " pod="kube-system/kube-proxy-nnb7t" Jan 28 01:29:59.338669 kubelet[2972]: I0128 01:29:59.336691 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c780a10a7bb0a222ca216ed0da21ec61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c780a10a7bb0a222ca216ed0da21ec61\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:29:59.338669 kubelet[2972]: I0128 01:29:59.336747 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c780a10a7bb0a222ca216ed0da21ec61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c780a10a7bb0a222ca216ed0da21ec61\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:29:59.338669 kubelet[2972]: I0128 01:29:59.336785 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:29:59.338669 kubelet[2972]: I0128 01:29:59.336824 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:29:59.356042 kubelet[2972]: I0128 01:29:59.336851 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3eaa1b9-f820-4985-a022-81509f53fef1-kube-proxy\") pod \"kube-proxy-nnb7t\" (UID: \"a3eaa1b9-f820-4985-a022-81509f53fef1\") " pod="kube-system/kube-proxy-nnb7t" Jan 28 01:29:59.356042 kubelet[2972]: I0128 01:29:59.336882 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3eaa1b9-f820-4985-a022-81509f53fef1-xtables-lock\") pod \"kube-proxy-nnb7t\" (UID: \"a3eaa1b9-f820-4985-a022-81509f53fef1\") " pod="kube-system/kube-proxy-nnb7t" Jan 28 01:29:59.356042 kubelet[2972]: I0128 01:29:59.336983 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3eaa1b9-f820-4985-a022-81509f53fef1-lib-modules\") pod \"kube-proxy-nnb7t\" (UID: \"a3eaa1b9-f820-4985-a022-81509f53fef1\") " pod="kube-system/kube-proxy-nnb7t" Jan 28 01:29:59.356042 kubelet[2972]: I0128 01:29:59.337022 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:29:59.356042 kubelet[2972]: I0128 01:29:59.337059 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:29:59.357201 kubelet[2972]: I0128 01:29:59.337089 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:29:59.357201 kubelet[2972]: I0128 01:29:59.337119 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c780a10a7bb0a222ca216ed0da21ec61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c780a10a7bb0a222ca216ed0da21ec61\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:29:59.357201 kubelet[2972]: I0128 01:29:59.337160 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:29:59.669126 kubelet[2972]: E0128 01:29:59.635222 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:59.669543 kubelet[2972]: E0128 01:29:59.669485 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:59.684995 kubelet[2972]: E0128 01:29:59.676253 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:59.736116 kubelet[2972]: E0128 01:29:59.704400 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:59.736116 kubelet[2972]: E0128 01:29:59.705118 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:59.736116 kubelet[2972]: E0128 01:29:59.705556 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:00.016491 kubelet[2972]: E0128 01:30:00.002521 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:00.060521 kubelet[2972]: I0128 01:30:00.053847 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.053515078 podStartE2EDuration="1.053515078s" podCreationTimestamp="2026-01-28 01:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:30:00.037133266 +0000 UTC m=+3.000821560" watchObservedRunningTime="2026-01-28 01:30:00.053515078 +0000 UTC m=+3.017203374" Jan 28 01:30:00.067777 kubelet[2972]: I0128 01:30:00.065520 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.065492382 podStartE2EDuration="1.065492382s" podCreationTimestamp="2026-01-28 01:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:29:59.852724402 +0000 UTC m=+2.816412666" watchObservedRunningTime="2026-01-28 01:30:00.065492382 +0000 UTC m=+3.029180677" Jan 28 01:30:00.092277 containerd[1622]: time="2026-01-28T01:30:00.092210229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnb7t,Uid:a3eaa1b9-f820-4985-a022-81509f53fef1,Namespace:kube-system,Attempt:0,}" Jan 28 01:30:00.377517 kubelet[2972]: I0128 01:30:00.364945 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.364844112 podStartE2EDuration="1.364844112s" podCreationTimestamp="2026-01-28 01:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:30:00.196167683 +0000 UTC m=+3.159855958" watchObservedRunningTime="2026-01-28 01:30:00.364844112 +0000 UTC m=+3.328532377" Jan 28 01:30:00.581767 containerd[1622]: time="2026-01-28T01:30:00.579303808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:30:00.581767 containerd[1622]: time="2026-01-28T01:30:00.579375300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:30:00.581767 containerd[1622]: time="2026-01-28T01:30:00.579389507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:30:00.581767 containerd[1622]: time="2026-01-28T01:30:00.579510782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:30:00.722859 kubelet[2972]: E0128 01:30:00.721379 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:00.722859 kubelet[2972]: E0128 01:30:00.722167 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:01.036088 containerd[1622]: time="2026-01-28T01:30:01.033314123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nnb7t,Uid:a3eaa1b9-f820-4985-a022-81509f53fef1,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ef08ea6ba7c7c34758c0f1d5f6d473a9519ca7f515f9b2f39ad35924758d8d\"" Jan 28 01:30:01.036351 kubelet[2972]: E0128 01:30:01.035461 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:01.058585 containerd[1622]: time="2026-01-28T01:30:01.056837857Z" level=info msg="CreateContainer within sandbox \"49ef08ea6ba7c7c34758c0f1d5f6d473a9519ca7f515f9b2f39ad35924758d8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:30:01.138713 containerd[1622]: time="2026-01-28T01:30:01.138511067Z" level=info msg="CreateContainer within sandbox \"49ef08ea6ba7c7c34758c0f1d5f6d473a9519ca7f515f9b2f39ad35924758d8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6704239987c62c1e563fc1432ab4427c2ca6b1fcfcd41861ff289705dc21e526\"" Jan 28 01:30:01.152968 containerd[1622]: time="2026-01-28T01:30:01.152869239Z" level=info msg="StartContainer for \"6704239987c62c1e563fc1432ab4427c2ca6b1fcfcd41861ff289705dc21e526\"" Jan 28 01:30:01.910117 kubelet[2972]: E0128 01:30:01.910083 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:01.910849 kubelet[2972]: E0128 01:30:01.910493 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:01.975289 containerd[1622]: time="2026-01-28T01:30:01.975149057Z" level=info msg="StartContainer for \"6704239987c62c1e563fc1432ab4427c2ca6b1fcfcd41861ff289705dc21e526\" returns successfully" Jan 28 01:30:02.962209 kubelet[2972]: E0128 01:30:02.961573 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:03.960760 kubelet[2972]: E0128 01:30:03.960478 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:04.486540 kubelet[2972]: I0128 01:30:04.486476 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nnb7t" podStartSLOduration=7.486450897 podStartE2EDuration="7.486450897s" podCreationTimestamp="2026-01-28 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:30:03.110445901 +0000 UTC m=+6.074134166" watchObservedRunningTime="2026-01-28 01:30:04.486450897 +0000 UTC m=+7.450139162" Jan 28 01:30:04.507082 kubelet[2972]: I0128 01:30:04.504185 2972 status_manager.go:890] "Failed to get status for pod" podUID="9dd686f8-eeff-47cc-aae5-295b686a5001" pod="tigera-operator/tigera-operator-7dcd859c48-q5fgx" err="pods \"tigera-operator-7dcd859c48-q5fgx\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" Jan 28 01:30:04.691381 kubelet[2972]: I0128 01:30:04.691097 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5w9\" (UniqueName: \"kubernetes.io/projected/9dd686f8-eeff-47cc-aae5-295b686a5001-kube-api-access-8c5w9\") pod \"tigera-operator-7dcd859c48-q5fgx\" (UID: \"9dd686f8-eeff-47cc-aae5-295b686a5001\") " pod="tigera-operator/tigera-operator-7dcd859c48-q5fgx" Jan 28 01:30:04.691381 kubelet[2972]: I0128 01:30:04.691195 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9dd686f8-eeff-47cc-aae5-295b686a5001-var-lib-calico\") pod \"tigera-operator-7dcd859c48-q5fgx\" (UID: \"9dd686f8-eeff-47cc-aae5-295b686a5001\") " pod="tigera-operator/tigera-operator-7dcd859c48-q5fgx" Jan 28 01:30:05.129307 containerd[1622]: time="2026-01-28T01:30:05.125565127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-q5fgx,Uid:9dd686f8-eeff-47cc-aae5-295b686a5001,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:30:05.392823 containerd[1622]: time="2026-01-28T01:30:05.384220548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:30:05.392823 containerd[1622]: time="2026-01-28T01:30:05.384363845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:30:05.392823 containerd[1622]: time="2026-01-28T01:30:05.384385975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:30:05.392823 containerd[1622]: time="2026-01-28T01:30:05.384747767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 28 01:30:05.770970 containerd[1622]: time="2026-01-28T01:30:05.770918840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-q5fgx,Uid:9dd686f8-eeff-47cc-aae5-295b686a5001,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b2bacea78679e188a9c3aeb8170c25ae9e292cbb205ba2098a88eb1e104290f\""
Jan 28 01:30:05.777197 containerd[1622]: time="2026-01-28T01:30:05.777118618Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 28 01:30:07.031902 kubelet[2972]: E0128 01:30:07.030767 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:30:07.847054 kubelet[2972]: E0128 01:30:07.846353 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:30:07.972793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351048364.mount: Deactivated successfully.
Jan 28 01:30:08.031576 kubelet[2972]: E0128 01:30:08.026574 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:30:08.031576 kubelet[2972]: E0128 01:30:08.031283 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:30:08.258942 kubelet[2972]: E0128 01:30:08.250510 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:30:09.033026 kubelet[2972]: E0128 01:30:09.030924 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:30:15.964475 containerd[1622]: time="2026-01-28T01:30:15.964032387Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:30:15.981736 containerd[1622]: time="2026-01-28T01:30:15.981576198Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 28 01:30:15.993234 containerd[1622]: time="2026-01-28T01:30:15.993172022Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:30:16.016079 containerd[1622]: time="2026-01-28T01:30:16.012485293Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:30:16.016079 containerd[1622]: time="2026-01-28T01:30:16.015111396Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 10.237949627s"
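For scale: per the two records above, the operator pull moved 25,061,691 bytes in 10.237949627s, roughly 2.3 MiB/s from quay.io. The arithmetic, with both constants copied from the log:

```go
package main

import "fmt"

func main() {
	// Figures taken from the containerd records above.
	const bytesRead = 25061691   // "stop pulling image ... bytes read=25061691"
	const seconds = 10.237949627 // "Pulled image ... in 10.237949627s"

	// Effective transfer rate over the whole pull, about 2.33 MiB/s.
	fmt.Printf("effective pull rate: %.2f MiB/s\n",
		float64(bytesRead)/seconds/(1<<20))
}
```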
\"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:30:16.056202 containerd[1622]: time="2026-01-28T01:30:16.046542373Z" level=info msg="CreateContainer within sandbox \"1b2bacea78679e188a9c3aeb8170c25ae9e292cbb205ba2098a88eb1e104290f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:30:16.216245 containerd[1622]: time="2026-01-28T01:30:16.215997665Z" level=info msg="CreateContainer within sandbox \"1b2bacea78679e188a9c3aeb8170c25ae9e292cbb205ba2098a88eb1e104290f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d\"" Jan 28 01:30:16.217323 containerd[1622]: time="2026-01-28T01:30:16.217215988Z" level=info msg="StartContainer for \"666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d\"" Jan 28 01:30:16.905436 containerd[1622]: time="2026-01-28T01:30:16.905251508Z" level=info msg="StartContainer for \"666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d\" returns successfully" Jan 28 01:30:17.328468 kubelet[2972]: I0128 01:30:17.327879 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-q5fgx" podStartSLOduration=3.063833803 podStartE2EDuration="13.318937771s" podCreationTimestamp="2026-01-28 01:30:04 +0000 UTC" firstStartedPulling="2026-01-28 01:30:05.77389735 +0000 UTC m=+8.737585645" lastFinishedPulling="2026-01-28 01:30:16.029001349 +0000 UTC m=+18.992689613" observedRunningTime="2026-01-28 01:30:17.253405758 +0000 UTC m=+20.217094033" watchObservedRunningTime="2026-01-28 01:30:17.318937771 +0000 UTC m=+20.282626047" Jan 28 01:30:32.492024 systemd-journald[1193]: Under memory pressure, flushing caches. Jan 28 01:30:32.413406 systemd-resolved[1501]: Under memory pressure, flushing caches. Jan 28 01:30:32.413471 systemd-resolved[1501]: Flushed all caches. Jan 28 01:30:32.677960 kubelet[2972]: E0128 01:30:32.667943 2972 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.431s" Jan 28 01:30:33.650504 kubelet[2972]: E0128 01:30:33.643493 2972 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod9dd686f8-eeff-47cc-aae5-295b686a5001/666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d\": RecentStats: unable to find data in memory cache]" Jan 28 01:30:33.694433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d-rootfs.mount: Deactivated successfully. 
Jan 28 01:30:34.180559 containerd[1622]: time="2026-01-28T01:30:34.179577391Z" level=info msg="shim disconnected" id=666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d namespace=k8s.io
Jan 28 01:30:34.190524 containerd[1622]: time="2026-01-28T01:30:34.186374710Z" level=warning msg="cleaning up after shim disconnected" id=666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d namespace=k8s.io
Jan 28 01:30:34.190524 containerd[1622]: time="2026-01-28T01:30:34.186414184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:30:35.187428 kubelet[2972]: I0128 01:30:35.185713 2972 scope.go:117] "RemoveContainer" containerID="666a6a852952882d5498e044d75c209cb4e7035cd5903bc6c7d3fd6f0440b64d"
Jan 28 01:30:35.277204 containerd[1622]: time="2026-01-28T01:30:35.250490981Z" level=info msg="CreateContainer within sandbox \"1b2bacea78679e188a9c3aeb8170c25ae9e292cbb205ba2098a88eb1e104290f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 28 01:30:35.587479 containerd[1622]: time="2026-01-28T01:30:35.586834812Z" level=info msg="CreateContainer within sandbox \"1b2bacea78679e188a9c3aeb8170c25ae9e292cbb205ba2098a88eb1e104290f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a0de9f7f4e9152ed2647ed5d7552a1cade726f5236f337df4ae6785babfa27fc\""
Jan 28 01:30:35.600411 containerd[1622]: time="2026-01-28T01:30:35.597234968Z" level=info msg="StartContainer for \"a0de9f7f4e9152ed2647ed5d7552a1cade726f5236f337df4ae6785babfa27fc\""
Jan 28 01:30:36.225587 containerd[1622]: time="2026-01-28T01:30:36.225324103Z" level=info msg="StartContainer for \"a0de9f7f4e9152ed2647ed5d7552a1cade726f5236f337df4ae6785babfa27fc\" returns successfully"
Jan 28 01:30:38.216871 sudo[1836]: pam_unix(sudo:session): session closed for user root
Jan 28 01:30:38.258674 sshd[1829]: pam_unix(sshd:session): session closed for user core
Jan 28 01:30:38.268092 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:59010.service: Deactivated successfully.
Jan 28 01:30:38.291719 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit.
Jan 28 01:30:38.297039 systemd[1]: session-9.scope: Deactivated successfully.
Jan 28 01:30:38.339693 systemd-logind[1612]: Removed session 9.
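The restart above is easiest to follow by container id: 666a6a85... threads through the cadvisor stats failure, the rootfs unmount, the shim-disconnect cleanup, and the kubelet's RemoveContainer decision, after which attempt 1 (a0de9f7f...) is created in the same sandbox. A throwaway helper for grouping journal lines by those 64-hex ids (hypothetical tooling, assuming journalctl output piped on stdin):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// containerID matches the 64-hex identifiers that containerd and the
// kubelet both embed in their messages.
var containerID = regexp.MustCompile(`[0-9a-f]{64}`)

func main() {
	events := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, id := range containerID.FindAllString(line, -1) {
			events[id] = append(events[id], line)
		}
	}
	// Print a short summary per container id; dump events[id] to dig deeper.
	for id, lines := range events {
		fmt.Printf("%s... (%d events)\n", id[:12], len(lines))
	}
}
```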
Jan 28 01:31:12.874104 kubelet[2972]: I0128 01:31:12.847537 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9388869f-bdfb-4dba-b743-7efcf5a11c09-typha-certs\") pod \"calico-typha-5ff799b947-rjdp5\" (UID: \"9388869f-bdfb-4dba-b743-7efcf5a11c09\") " pod="calico-system/calico-typha-5ff799b947-rjdp5"
Jan 28 01:31:12.874104 kubelet[2972]: I0128 01:31:12.847699 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9388869f-bdfb-4dba-b743-7efcf5a11c09-tigera-ca-bundle\") pod \"calico-typha-5ff799b947-rjdp5\" (UID: \"9388869f-bdfb-4dba-b743-7efcf5a11c09\") " pod="calico-system/calico-typha-5ff799b947-rjdp5"
Jan 28 01:31:12.874104 kubelet[2972]: I0128 01:31:12.847734 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kq52\" (UniqueName: \"kubernetes.io/projected/9388869f-bdfb-4dba-b743-7efcf5a11c09-kube-api-access-7kq52\") pod \"calico-typha-5ff799b947-rjdp5\" (UID: \"9388869f-bdfb-4dba-b743-7efcf5a11c09\") " pod="calico-system/calico-typha-5ff799b947-rjdp5"
Jan 28 01:31:13.149233 kubelet[2972]: E0128 01:31:13.138336 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:13.178722 containerd[1622]: time="2026-01-28T01:31:13.176065964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ff799b947-rjdp5,Uid:9388869f-bdfb-4dba-b743-7efcf5a11c09,Namespace:calico-system,Attempt:0,}"
Jan 28 01:31:13.483341 kubelet[2972]: I0128 01:31:13.471830 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b1cff35a-4ecc-4216-ba16-440cc319454f-node-certs\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.483341 kubelet[2972]: I0128 01:31:13.472375 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-var-run-calico\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.483341 kubelet[2972]: I0128 01:31:13.472421 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmqn\" (UniqueName: \"kubernetes.io/projected/b1cff35a-4ecc-4216-ba16-440cc319454f-kube-api-access-trmqn\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.483341 kubelet[2972]: I0128 01:31:13.472520 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-flexvol-driver-host\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.483341 kubelet[2972]: I0128 01:31:13.472552 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-policysync\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502326 kubelet[2972]: I0128 01:31:13.472583 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-var-lib-calico\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502326 kubelet[2972]: I0128 01:31:13.472898 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-cni-log-dir\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502326 kubelet[2972]: I0128 01:31:13.473066 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-cni-net-dir\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502326 kubelet[2972]: I0128 01:31:13.473237 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-lib-modules\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502326 kubelet[2972]: I0128 01:31:13.473403 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-xtables-lock\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502870 kubelet[2972]: I0128 01:31:13.474181 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b1cff35a-4ecc-4216-ba16-440cc319454f-cni-bin-dir\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.502870 kubelet[2972]: I0128 01:31:13.474343 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1cff35a-4ecc-4216-ba16-440cc319454f-tigera-ca-bundle\") pod \"calico-node-dxbgw\" (UID: \"b1cff35a-4ecc-4216-ba16-440cc319454f\") " pod="calico-system/calico-node-dxbgw"
Jan 28 01:31:13.577300 containerd[1622]: time="2026-01-28T01:31:13.576049639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:31:13.583412 containerd[1622]: time="2026-01-28T01:31:13.583026906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:31:13.583412 containerd[1622]: time="2026-01-28T01:31:13.583056403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:31:13.583412 containerd[1622]: time="2026-01-28T01:31:13.583196035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:31:13.628439 kubelet[2972]: E0128 01:31:13.628080 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.628439 kubelet[2972]: W0128 01:31:13.628125 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.628439 kubelet[2972]: E0128 01:31:13.628322 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.698213 kubelet[2972]: E0128 01:31:13.697987 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.698213 kubelet[2972]: W0128 01:31:13.698108 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.698791 kubelet[2972]: E0128 01:31:13.698136 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.744274 kubelet[2972]: E0128 01:31:13.735813 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:13.770965 kubelet[2972]: E0128 01:31:13.770868 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.770965 kubelet[2972]: W0128 01:31:13.770901 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.770965 kubelet[2972]: E0128 01:31:13.770930 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.774823 kubelet[2972]: E0128 01:31:13.772814 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.774823 kubelet[2972]: W0128 01:31:13.772835 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.774823 kubelet[2972]: E0128 01:31:13.772850 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.774823 kubelet[2972]: E0128 01:31:13.774199 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.774823 kubelet[2972]: W0128 01:31:13.774212 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.774823 kubelet[2972]: E0128 01:31:13.774226 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.776156 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.787783 kubelet[2972]: W0128 01:31:13.777698 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.777720 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.779728 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.787783 kubelet[2972]: W0128 01:31:13.779892 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.779906 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.782204 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.787783 kubelet[2972]: W0128 01:31:13.782217 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.782231 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.787783 kubelet[2972]: E0128 01:31:13.786169 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.788233 kubelet[2972]: W0128 01:31:13.786181 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.788233 kubelet[2972]: E0128 01:31:13.786193 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.790414 kubelet[2972]: E0128 01:31:13.789780 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.790414 kubelet[2972]: W0128 01:31:13.789830 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.790414 kubelet[2972]: E0128 01:31:13.789844 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.800840 kubelet[2972]: E0128 01:31:13.800791 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.800840 kubelet[2972]: W0128 01:31:13.800812 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.800840 kubelet[2972]: E0128 01:31:13.800826 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.831930 kubelet[2972]: E0128 01:31:13.805796 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.831930 kubelet[2972]: W0128 01:31:13.805810 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.831930 kubelet[2972]: E0128 01:31:13.805823 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.831930 kubelet[2972]: E0128 01:31:13.830760 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.831930 kubelet[2972]: W0128 01:31:13.830795 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.831930 kubelet[2972]: E0128 01:31:13.831190 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.847049 kubelet[2972]: E0128 01:31:13.847028 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.847438 kubelet[2972]: W0128 01:31:13.847127 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.847438 kubelet[2972]: E0128 01:31:13.847153 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.847821 kubelet[2972]: E0128 01:31:13.847807 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.847978 kubelet[2972]: W0128 01:31:13.847961 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.848119 kubelet[2972]: E0128 01:31:13.848102 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.848423 kubelet[2972]: E0128 01:31:13.848410 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.848738 kubelet[2972]: W0128 01:31:13.848557 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.848738 kubelet[2972]: E0128 01:31:13.848576 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.849041 kubelet[2972]: E0128 01:31:13.849029 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.849108 kubelet[2972]: W0128 01:31:13.849096 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.849180 kubelet[2972]: E0128 01:31:13.849167 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.849540 kubelet[2972]: E0128 01:31:13.849525 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.849904 kubelet[2972]: W0128 01:31:13.849887 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.849970 kubelet[2972]: E0128 01:31:13.849959 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.850928 kubelet[2972]: E0128 01:31:13.850909 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:13.853155 containerd[1622]: time="2026-01-28T01:31:13.852908502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxbgw,Uid:b1cff35a-4ecc-4216-ba16-440cc319454f,Namespace:calico-system,Attempt:0,}" Jan 28 01:31:13.862204 kubelet[2972]: E0128 01:31:13.862186 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.862386 kubelet[2972]: W0128 01:31:13.862370 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.863340 kubelet[2972]: E0128 01:31:13.862828 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.865509 kubelet[2972]: I0128 01:31:13.865384 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4b5e90d-930c-4b60-ab0a-ec73967e82da-registration-dir\") pod \"csi-node-driver-9gwj5\" (UID: \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\") " pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:13.871730 kubelet[2972]: E0128 01:31:13.871343 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.871730 kubelet[2972]: W0128 01:31:13.871360 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.871730 kubelet[2972]: E0128 01:31:13.871379 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.873735 kubelet[2972]: E0128 01:31:13.873131 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.874564 kubelet[2972]: W0128 01:31:13.873416 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.874564 kubelet[2972]: E0128 01:31:13.874161 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.877836 kubelet[2972]: E0128 01:31:13.877139 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.877836 kubelet[2972]: W0128 01:31:13.877308 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.878410 kubelet[2972]: E0128 01:31:13.878258 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.884170 kubelet[2972]: E0128 01:31:13.883770 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.884170 kubelet[2972]: W0128 01:31:13.883788 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.885244 kubelet[2972]: E0128 01:31:13.884798 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.885244 kubelet[2972]: I0128 01:31:13.884834 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4b5e90d-930c-4b60-ab0a-ec73967e82da-socket-dir\") pod \"csi-node-driver-9gwj5\" (UID: \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\") " pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:13.888003 kubelet[2972]: E0128 01:31:13.886780 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.888003 kubelet[2972]: W0128 01:31:13.886796 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.888565 kubelet[2972]: E0128 01:31:13.888545 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.892585 kubelet[2972]: E0128 01:31:13.892381 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.892585 kubelet[2972]: W0128 01:31:13.892511 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.892585 kubelet[2972]: E0128 01:31:13.892740 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.894722 kubelet[2972]: E0128 01:31:13.894260 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.894722 kubelet[2972]: W0128 01:31:13.894325 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.894722 kubelet[2972]: E0128 01:31:13.894406 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.894849 kubelet[2972]: E0128 01:31:13.894811 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.894849 kubelet[2972]: W0128 01:31:13.894823 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.894941 kubelet[2972]: E0128 01:31:13.894924 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.896050 kubelet[2972]: E0128 01:31:13.896033 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.896256 kubelet[2972]: W0128 01:31:13.896239 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.897742 kubelet[2972]: E0128 01:31:13.896663 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.897742 kubelet[2972]: I0128 01:31:13.896745 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b4b5e90d-930c-4b60-ab0a-ec73967e82da-varrun\") pod \"csi-node-driver-9gwj5\" (UID: \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\") " pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:13.903767 kubelet[2972]: E0128 01:31:13.903579 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.904715 kubelet[2972]: W0128 01:31:13.903962 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.904715 kubelet[2972]: E0128 01:31:13.904079 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.906435 kubelet[2972]: E0128 01:31:13.906101 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.906435 kubelet[2972]: W0128 01:31:13.906116 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.906435 kubelet[2972]: E0128 01:31:13.906137 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.910650 kubelet[2972]: E0128 01:31:13.908085 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.910650 kubelet[2972]: W0128 01:31:13.908101 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.925199 kubelet[2972]: E0128 01:31:13.920838 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.936877 kubelet[2972]: E0128 01:31:13.936551 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.936877 kubelet[2972]: W0128 01:31:13.936573 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.936877 kubelet[2972]: E0128 01:31:13.936590 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:13.936877 kubelet[2972]: I0128 01:31:13.936742 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4b5e90d-930c-4b60-ab0a-ec73967e82da-kubelet-dir\") pod \"csi-node-driver-9gwj5\" (UID: \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\") " pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:13.939257 kubelet[2972]: E0128 01:31:13.939237 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.942758 kubelet[2972]: W0128 01:31:13.939347 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.943235 kubelet[2972]: E0128 01:31:13.943215 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:13.944230 kubelet[2972]: E0128 01:31:13.944212 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:13.944320 kubelet[2972]: W0128 01:31:13.944304 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:13.944566 kubelet[2972]: E0128 01:31:13.944542 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.061311 kubelet[2972]: E0128 01:31:14.061278 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.104411 kubelet[2972]: W0128 01:31:14.067913 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.104411 kubelet[2972]: E0128 01:31:14.067951 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.104411 kubelet[2972]: E0128 01:31:14.069571 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.104411 kubelet[2972]: W0128 01:31:14.069586 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.104411 kubelet[2972]: E0128 01:31:14.069714 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.114221 kubelet[2972]: E0128 01:31:14.108070 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.114221 kubelet[2972]: W0128 01:31:14.108092 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.114221 kubelet[2972]: E0128 01:31:14.108115 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:14.114221 kubelet[2972]: I0128 01:31:14.108154 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp26f\" (UniqueName: \"kubernetes.io/projected/b4b5e90d-930c-4b60-ab0a-ec73967e82da-kube-api-access-tp26f\") pod \"csi-node-driver-9gwj5\" (UID: \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\") " pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:14.114221 kubelet[2972]: E0128 01:31:14.108958 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.114221 kubelet[2972]: W0128 01:31:14.108973 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.114221 kubelet[2972]: E0128 01:31:14.109080 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.114221 kubelet[2972]: E0128 01:31:14.111045 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.114221 kubelet[2972]: W0128 01:31:14.111076 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.111442 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.112524 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.115137 kubelet[2972]: W0128 01:31:14.112543 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.112785 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.113582 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.115137 kubelet[2972]: W0128 01:31:14.113686 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.113737 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.114246 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.115137 kubelet[2972]: W0128 01:31:14.114260 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.115137 kubelet[2972]: E0128 01:31:14.114935 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.115222 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.123587 kubelet[2972]: W0128 01:31:14.115236 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.115282 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.116879 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.123587 kubelet[2972]: W0128 01:31:14.116893 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.117216 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.117728 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.123587 kubelet[2972]: W0128 01:31:14.117740 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.117850 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:14.123587 kubelet[2972]: E0128 01:31:14.118033 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.124259 containerd[1622]: time="2026-01-28T01:31:14.115364643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ff799b947-rjdp5,Uid:9388869f-bdfb-4dba-b743-7efcf5a11c09,Namespace:calico-system,Attempt:0,} returns sandbox id \"3765c290f3225c9a1324080c21d0fbe513a87c682c6d88a1564dbd40a6dcac67\"" Jan 28 01:31:14.124259 containerd[1622]: time="2026-01-28T01:31:14.121960444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:31:14.124369 kubelet[2972]: W0128 01:31:14.118046 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.124369 kubelet[2972]: E0128 01:31:14.118129 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.124369 kubelet[2972]: E0128 01:31:14.118920 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:14.124369 kubelet[2972]: E0128 01:31:14.124160 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.124369 kubelet[2972]: W0128 01:31:14.124179 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.124835 kubelet[2972]: E0128 01:31:14.124387 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.127006 kubelet[2972]: E0128 01:31:14.126911 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.127006 kubelet[2972]: W0128 01:31:14.126982 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.127541 kubelet[2972]: E0128 01:31:14.127451 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:14.129445 kubelet[2972]: E0128 01:31:14.129426 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:14.129834 kubelet[2972]: W0128 01:31:14.129744 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:14.131702 kubelet[2972]: E0128 01:31:14.129987 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 28 01:31:14.137443 kubelet[2972]: E0128 01:31:14.137136 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:31:14.137443 kubelet[2972]: W0128 01:31:14.137293 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:31:14.139440 kubelet[2972]: E0128 01:31:14.139243 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet FlexVolume probe records above (driver-call.go:262, driver-call.go:149, plugins.go:695) repeat verbatim with new timestamps from 01:31:14.147791 through 01:31:14.240731; duplicates omitted]
Jan 28 01:31:14.284432 containerd[1622]: time="2026-01-28T01:31:14.283546218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:31:14.284432 containerd[1622]: time="2026-01-28T01:31:14.283717480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:31:14.284432 containerd[1622]: time="2026-01-28T01:31:14.283763687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:31:14.284432 containerd[1622]: time="2026-01-28T01:31:14.283943816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:31:14.287726 kubelet[2972]: E0128 01:31:14.286541 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:31:14.287726 kubelet[2972]: W0128 01:31:14.286571 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:31:14.287726 kubelet[2972]: E0128 01:31:14.286699 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
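The driver-call.go:262 failure above is the kubelet exec'ing a FlexVolume driver and JSON-decoding whatever the binary printed on stdout; since /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, the call yields zero bytes of output, and json.Unmarshal on an empty input reports exactly "unexpected end of JSON input". A minimal Go sketch of that failure mode (illustrative only, not the kubelet's actual code; the driverStatus struct is an assumption modeled on the FlexVolume output convention):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus models the JSON reply a FlexVolume driver is expected to
// print on stdout; field names follow the FlexVolume convention.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver mimics the probe: run `<driver> init` and unmarshal stdout.
func callDriver(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").Output() // empty when the binary is missing
	if err != nil {
		// a missing binary surfaces as "executable file not found in $PATH"
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// with zero bytes of output this is "unexpected end of JSON input"
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", out, uerr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // ... unexpected end of JSON input
}
```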
Jan 28 01:31:14.830132 containerd[1622]: time="2026-01-28T01:31:14.818436159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxbgw,Uid:b1cff35a-4ecc-4216-ba16-440cc319454f,Namespace:calico-system,Attempt:0,} returns sandbox id \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\""
Jan 28 01:31:14.865359 kubelet[2972]: E0128 01:31:14.865318 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:15.583895 kubelet[2972]: E0128 01:31:15.578421 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:31:17.337521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761611326.mount: Deactivated successfully.
[the pod_workers.go:1301 "network is not ready" record for pod calico-system/csi-node-driver-9gwj5 recurs at 01:31:17.555901, 01:31:19.555425, 01:31:21.576972, and 01:31:23.607920; duplicates omitted]
Jan 28 01:31:24.134519 containerd[1622]: time="2026-01-28T01:31:24.133192521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:31:24.159115 containerd[1622]: time="2026-01-28T01:31:24.158548544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 28 01:31:24.163486 containerd[1622]: time="2026-01-28T01:31:24.161768354Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:31:24.178958 containerd[1622]: time="2026-01-28T01:31:24.178882717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:31:24.182437 containerd[1622]: time="2026-01-28T01:31:24.180799403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 10.058796208s"
Jan 28 01:31:24.182437 containerd[1622]: time="2026-01-28T01:31:24.180965907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 28 01:31:24.191083 containerd[1622]: time="2026-01-28T01:31:24.190993969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 01:31:24.260669 containerd[1622]: time="2026-01-28T01:31:24.260280597Z" level=info msg="CreateContainer within sandbox \"3765c290f3225c9a1324080c21d0fbe513a87c682c6d88a1564dbd40a6dcac67\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 01:31:24.376208 containerd[1622]: time="2026-01-28T01:31:24.375489296Z" level=info msg="CreateContainer within sandbox \"3765c290f3225c9a1324080c21d0fbe513a87c682c6d88a1564dbd40a6dcac67\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2f96297222d9da07ac427df9b52aee16bd2e89db3119080a68e695bac22a7156\""
Jan 28 01:31:24.381500 containerd[1622]: time="2026-01-28T01:31:24.378573040Z" level=info msg="StartContainer for \"2f96297222d9da07ac427df9b52aee16bd2e89db3119080a68e695bac22a7156\""
Jan 28 01:31:24.556158 kubelet[2972]: E0128 01:31:24.556052 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:24.658773 kubelet[2972]: E0128 01:31:24.658582 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:31:24.666869 kubelet[2972]: W0128 01:31:24.659412 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:31:24.666869 kubelet[2972]: E0128 01:31:24.659452 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
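The PullImage → CreateContainer → StartContainer sequence above is containerd's CRI surface doing the work the kubelet requested. The same flow can be driven directly with containerd's Go client; a rough sketch assuming the default socket path and the k8s.io namespace CRI uses (container and snapshot names here are illustrative, error handling minimal):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, as in the "Pulled image ... returns image reference" records.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container plus a task, then start it
	// (compare "StartContainer ... returns successfully" below).
	container, err := client.NewContainer(ctx, "typha-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("typha-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s as pid %d", container.ID(), task.Pid())
}
```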
[the kubelet FlexVolume probe records repeat from 01:31:24.677025 through 01:31:24.936815; duplicates omitted]
Jan 28 01:31:25.602587 kubelet[2972]: E0128 01:31:25.580358 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:31:25.912723 containerd[1622]: time="2026-01-28T01:31:25.912333272Z" level=info msg="StartContainer for \"2f96297222d9da07ac427df9b52aee16bd2e89db3119080a68e695bac22a7156\" returns successfully"
Jan 28 01:31:27.098380 kubelet[2972]: E0128 01:31:27.098312 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:27.199891 kubelet[2972]: E0128 01:31:27.196031 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:31:27.199891 kubelet[2972]: W0128 01:31:27.196187 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:31:27.199891 kubelet[2972]: E0128 01:31:27.196226 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
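The recurring dns.go:153 "Nameserver limits exceeded" record reflects the classic resolver limit: glibc only honors the first three nameserver entries in resolv.conf (MAXNS), so the kubelet warns and applies a trimmed list, here 1.1.1.1 1.0.0.1 8.8.8.8. A standalone sketch of that trimming rule (simplified parsing, not the kubelet's code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolv.conf limit honored by glibc (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		// Mirrors the kubelet warning: extra entries are dropped, not used.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", servers)
}
```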
[the kubelet FlexVolume probe records repeat from 01:31:27.215738 through 01:31:27.556244; duplicates omitted]
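The probe storm stops only if something answers the init call: either the stray nodeagent~uds plugin directory is removed, or an executable is installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds that prints a JSON status on stdout. A minimal Go program that would satisfy the init handshake, assuming the conventional FlexVolume reply format (status "Success" / "Failure" / "Not supported", plus a capabilities map):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// reply follows the FlexVolume driver output convention: every call
// prints a JSON object with at least a "status" field.
type reply struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		json.NewEncoder(os.Stdout).Encode(reply{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Advertise no attach support, so the kubelet skips attach/detach calls.
		json.NewEncoder(os.Stdout).Encode(reply{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		// "Not supported" tells the kubelet to use its default handling
		// for calls the driver does not implement.
		json.NewEncoder(os.Stdout).Encode(reply{
			Status:  "Not supported",
			Message: fmt.Sprintf("command %q not implemented", os.Args[1]),
		})
	}
}
```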
Jan 28 01:31:27.582009 kubelet[2972]: E0128 01:31:27.578033 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:31:27.583216 kubelet[2972]: I0128 01:31:27.583164 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5ff799b947-rjdp5" podStartSLOduration=5.5153113860000005 podStartE2EDuration="15.583148304s" podCreationTimestamp="2026-01-28 01:31:12 +0000 UTC" firstStartedPulling="2026-01-28 01:31:14.121313046 +0000 UTC m=+77.085001311" lastFinishedPulling="2026-01-28 01:31:24.189149944 +0000 UTC m=+87.152838229" observedRunningTime="2026-01-28 01:31:27.583012611 +0000 UTC m=+90.546700877" watchObservedRunningTime="2026-01-28 01:31:27.583148304 +0000 UTC m=+90.546836570"
[the kubelet FlexVolume probe records repeat from 01:31:27.583649 through 01:31:27.873214, some E/W records interleaved; duplicates omitted]
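The pod_startup_latency_tracker.go:104 record above is internally consistent: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of that arithmetic in Go, with the timestamps copied from the record:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // format of the timestamps in the record

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-28 01:31:12 +0000 UTC")
	firstPull := mustParse("2026-01-28 01:31:14.121313046 +0000 UTC")
	lastPull := mustParse("2026-01-28 01:31:24.189149944 +0000 UTC")
	running := mustParse("2026-01-28 01:31:27.583148304 +0000 UTC")

	e2e := running.Sub(created)     // 15.583148304s, matching podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 10.067836898s spent pulling the typha image
	slo := e2e - pull               // 5.515311406s

	fmt.Println("e2e:", e2e)
	fmt.Println("pull:", pull)
	fmt.Println("slo (e2e minus pull):", slo)
}
```

The logged podStartSLOduration (5.5153113860000005s) differs from this wall-clock result by about 20ns because the kubelet subtracts the monotonic readings shown in the record (m=+87.152838229 minus m=+77.085001311) rather than the wall-clock timestamps.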
Jan 28 01:31:28.158703 kubelet[2972]: E0128 01:31:28.142964 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:28.268461 kubelet[2972]: E0128 01:31:28.264698 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:31:28.268461 kubelet[2972]: W0128 01:31:28.268175 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:31:28.268461 kubelet[2972]: E0128 01:31:28.268313 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the kubelet FlexVolume probe records repeat from 01:31:28.278585 through 01:31:29.197095; duplicates omitted]
Jan 28 01:31:29.277023 kubelet[2972]: E0128 01:31:29.270539 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:31:29.277023 kubelet[2972]: W0128 01:31:29.270570 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:31:29.277285 kubelet[2972]: E0128 01:31:29.277264 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.298001 kubelet[2972]: E0128 01:31:29.297178 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.298001 kubelet[2972]: W0128 01:31:29.297204 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.298001 kubelet[2972]: E0128 01:31:29.297377 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.302445 kubelet[2972]: E0128 01:31:29.302322 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.302445 kubelet[2972]: W0128 01:31:29.302418 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.327988 kubelet[2972]: E0128 01:31:29.302669 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.411084 kubelet[2972]: E0128 01:31:29.410036 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.411084 kubelet[2972]: W0128 01:31:29.410068 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.411084 kubelet[2972]: E0128 01:31:29.410105 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.414924 kubelet[2972]: E0128 01:31:29.414082 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:29.414924 kubelet[2972]: E0128 01:31:29.414849 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:29.418377 kubelet[2972]: E0128 01:31:29.418355 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:29.422514 kubelet[2972]: E0128 01:31:29.420688 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.422705 kubelet[2972]: W0128 01:31:29.422685 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.425478 kubelet[2972]: E0128 01:31:29.422777 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.426165 kubelet[2972]: E0128 01:31:29.426071 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.426301 kubelet[2972]: W0128 01:31:29.426287 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.426449 kubelet[2972]: E0128 01:31:29.426435 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.427839 kubelet[2972]: E0128 01:31:29.427761 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.427955 kubelet[2972]: W0128 01:31:29.427939 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.432116 kubelet[2972]: E0128 01:31:29.432098 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.436134 kubelet[2972]: E0128 01:31:29.436119 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.436467 kubelet[2972]: W0128 01:31:29.436192 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.436467 kubelet[2972]: E0128 01:31:29.436218 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.439386 kubelet[2972]: E0128 01:31:29.439175 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.439386 kubelet[2972]: W0128 01:31:29.439195 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.439386 kubelet[2972]: E0128 01:31:29.439350 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.467574 kubelet[2972]: E0128 01:31:29.467504 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.467574 kubelet[2972]: W0128 01:31:29.467529 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.467574 kubelet[2972]: E0128 01:31:29.467552 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.582527 kubelet[2972]: E0128 01:31:29.571434 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.582527 kubelet[2972]: W0128 01:31:29.571490 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.582527 kubelet[2972]: E0128 01:31:29.571771 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.582527 kubelet[2972]: E0128 01:31:29.576257 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.582527 kubelet[2972]: W0128 01:31:29.576273 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.582527 kubelet[2972]: E0128 01:31:29.576293 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.635410 kubelet[2972]: E0128 01:31:29.596115 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.635410 kubelet[2972]: W0128 01:31:29.596158 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.635410 kubelet[2972]: E0128 01:31:29.596182 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.635410 kubelet[2972]: E0128 01:31:29.607122 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.635410 kubelet[2972]: W0128 01:31:29.607136 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.635410 kubelet[2972]: E0128 01:31:29.607156 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.635410 kubelet[2972]: E0128 01:31:29.607472 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.635410 kubelet[2972]: W0128 01:31:29.607483 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.635410 kubelet[2972]: E0128 01:31:29.607500 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.683429 kubelet[2972]: E0128 01:31:29.650507 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.683429 kubelet[2972]: W0128 01:31:29.650692 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.683429 kubelet[2972]: E0128 01:31:29.650721 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.726929 kubelet[2972]: E0128 01:31:29.725190 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.726929 kubelet[2972]: W0128 01:31:29.725211 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.726929 kubelet[2972]: E0128 01:31:29.725234 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.744045 kubelet[2972]: E0128 01:31:29.741199 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.744045 kubelet[2972]: W0128 01:31:29.741264 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.744045 kubelet[2972]: E0128 01:31:29.741296 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.775342 kubelet[2972]: E0128 01:31:29.772717 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.775342 kubelet[2972]: W0128 01:31:29.772765 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.775342 kubelet[2972]: E0128 01:31:29.772795 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.787693 kubelet[2972]: E0128 01:31:29.786589 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.787693 kubelet[2972]: W0128 01:31:29.786693 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.787693 kubelet[2972]: E0128 01:31:29.786719 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.801887 kubelet[2972]: E0128 01:31:29.793458 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.801887 kubelet[2972]: W0128 01:31:29.793475 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.801887 kubelet[2972]: E0128 01:31:29.793497 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.801887 kubelet[2972]: E0128 01:31:29.800152 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.801887 kubelet[2972]: W0128 01:31:29.800168 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.801887 kubelet[2972]: E0128 01:31:29.800189 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.801887 kubelet[2972]: E0128 01:31:29.800937 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.801887 kubelet[2972]: W0128 01:31:29.800949 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.801887 kubelet[2972]: E0128 01:31:29.800963 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.817372 kubelet[2972]: E0128 01:31:29.816988 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.817372 kubelet[2972]: W0128 01:31:29.817009 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.817372 kubelet[2972]: E0128 01:31:29.817026 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.820382 kubelet[2972]: E0128 01:31:29.820186 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.820382 kubelet[2972]: W0128 01:31:29.820206 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.820544 kubelet[2972]: E0128 01:31:29.820526 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.852330 kubelet[2972]: E0128 01:31:29.852301 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.854934 kubelet[2972]: W0128 01:31:29.852516 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.854934 kubelet[2972]: E0128 01:31:29.852654 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.854934 kubelet[2972]: E0128 01:31:29.853893 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.854934 kubelet[2972]: W0128 01:31:29.853930 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.854934 kubelet[2972]: E0128 01:31:29.853967 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.871509 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.885409 kubelet[2972]: W0128 01:31:29.871528 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.871545 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.871922 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.885409 kubelet[2972]: W0128 01:31:29.871936 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.871949 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.872211 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.885409 kubelet[2972]: W0128 01:31:29.872220 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.872231 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.885409 kubelet[2972]: E0128 01:31:29.885138 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.885809 kubelet[2972]: W0128 01:31:29.885152 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.885809 kubelet[2972]: E0128 01:31:29.885218 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.898405 kubelet[2972]: E0128 01:31:29.898385 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.898702 kubelet[2972]: W0128 01:31:29.898469 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.898702 kubelet[2972]: E0128 01:31:29.898493 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.899290 kubelet[2972]: E0128 01:31:29.899273 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.899520 kubelet[2972]: W0128 01:31:29.899348 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.899520 kubelet[2972]: E0128 01:31:29.899435 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.907173 kubelet[2972]: E0128 01:31:29.907155 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.907308 kubelet[2972]: W0128 01:31:29.907249 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.907545 kubelet[2972]: E0128 01:31:29.907450 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.915013 kubelet[2972]: E0128 01:31:29.911717 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.915013 kubelet[2972]: W0128 01:31:29.911750 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.915013 kubelet[2972]: E0128 01:31:29.914588 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.920517 kubelet[2972]: E0128 01:31:29.920130 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.920517 kubelet[2972]: W0128 01:31:29.920167 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.926343 kubelet[2972]: E0128 01:31:29.926116 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.926693 kubelet[2972]: E0128 01:31:29.926248 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.926693 kubelet[2972]: W0128 01:31:29.926448 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.926693 kubelet[2972]: E0128 01:31:29.926658 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.931541 kubelet[2972]: E0128 01:31:29.931524 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.931855 kubelet[2972]: W0128 01:31:29.931675 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.931855 kubelet[2972]: E0128 01:31:29.931697 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.941223 kubelet[2972]: E0128 01:31:29.939949 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.941223 kubelet[2972]: W0128 01:31:29.939964 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.941223 kubelet[2972]: E0128 01:31:29.940903 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.944931 kubelet[2972]: E0128 01:31:29.944717 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.944931 kubelet[2972]: W0128 01:31:29.944731 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.945449 kubelet[2972]: E0128 01:31:29.945082 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.946353 kubelet[2972]: E0128 01:31:29.946174 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.946353 kubelet[2972]: W0128 01:31:29.946187 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.946693 kubelet[2972]: E0128 01:31:29.946661 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.949499 kubelet[2972]: E0128 01:31:29.949485 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.957231 kubelet[2972]: W0128 01:31:29.949680 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.957231 kubelet[2972]: E0128 01:31:29.949746 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.959549 kubelet[2972]: E0128 01:31:29.959387 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.959549 kubelet[2972]: W0128 01:31:29.959454 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.959792 kubelet[2972]: E0128 01:31:29.959764 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.962489 kubelet[2972]: E0128 01:31:29.962475 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.962740 kubelet[2972]: W0128 01:31:29.962547 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.962819 kubelet[2972]: E0128 01:31:29.962804 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.978939 containerd[1622]: time="2026-01-28T01:31:29.966534322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:31:29.979410 kubelet[2972]: E0128 01:31:29.971211 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.979410 kubelet[2972]: W0128 01:31:29.971224 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.979410 kubelet[2972]: E0128 01:31:29.971467 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:29.979410 kubelet[2972]: E0128 01:31:29.971752 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.979410 kubelet[2972]: W0128 01:31:29.971764 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.979410 kubelet[2972]: E0128 01:31:29.979038 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:29.980817 kubelet[2972]: E0128 01:31:29.980767 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:29.980817 kubelet[2972]: W0128 01:31:29.980784 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:29.980817 kubelet[2972]: E0128 01:31:29.990476 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.003330 containerd[1622]: time="2026-01-28T01:31:30.003011203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 28 01:31:30.005579 kubelet[2972]: E0128 01:31:30.005296 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.005579 kubelet[2972]: W0128 01:31:30.005449 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.006703 kubelet[2972]: E0128 01:31:30.005906 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.008695 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.020255 kubelet[2972]: W0128 01:31:30.008709 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.008821 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.011941 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.020255 kubelet[2972]: W0128 01:31:30.011959 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.011978 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.018137 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.020255 kubelet[2972]: W0128 01:31:30.018155 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.018179 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.020255 kubelet[2972]: E0128 01:31:30.018452 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.026678 kubelet[2972]: W0128 01:31:30.018462 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.026678 kubelet[2972]: E0128 01:31:30.018475 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.026678 kubelet[2972]: E0128 01:31:30.018790 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.026678 kubelet[2972]: W0128 01:31:30.018801 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.026678 kubelet[2972]: E0128 01:31:30.018812 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.027178 kubelet[2972]: E0128 01:31:30.027159 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.027366 kubelet[2972]: W0128 01:31:30.027241 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.027444 kubelet[2972]: E0128 01:31:30.027427 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.028522 kubelet[2972]: E0128 01:31:30.028505 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.028805 kubelet[2972]: W0128 01:31:30.028584 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.028805 kubelet[2972]: E0128 01:31:30.028676 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.030113 kubelet[2972]: E0128 01:31:30.029925 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.030113 kubelet[2972]: W0128 01:31:30.029940 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.030113 kubelet[2972]: E0128 01:31:30.029954 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.032450 kubelet[2972]: E0128 01:31:30.032435 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.032743 kubelet[2972]: W0128 01:31:30.032508 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.032743 kubelet[2972]: E0128 01:31:30.032528 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.033146 kubelet[2972]: E0128 01:31:30.033132 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.033214 kubelet[2972]: W0128 01:31:30.033203 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.033271 kubelet[2972]: E0128 01:31:30.033259 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.033786 kubelet[2972]: E0128 01:31:30.033561 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.033786 kubelet[2972]: W0128 01:31:30.033573 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.033786 kubelet[2972]: E0128 01:31:30.033585 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.035366 kubelet[2972]: E0128 01:31:30.035172 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.035366 kubelet[2972]: W0128 01:31:30.035187 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.035366 kubelet[2972]: E0128 01:31:30.035201 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.037569 kubelet[2972]: E0128 01:31:30.037402 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.037569 kubelet[2972]: W0128 01:31:30.037417 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.037569 kubelet[2972]: E0128 01:31:30.037430 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.063780 kubelet[2972]: E0128 01:31:30.061856 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.063780 kubelet[2972]: W0128 01:31:30.062173 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.122946 kubelet[2972]: E0128 01:31:30.062559 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.137158 kubelet[2972]: E0128 01:31:30.123740 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.137158 kubelet[2972]: W0128 01:31:30.131469 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.137158 kubelet[2972]: E0128 01:31:30.132154 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.137158 kubelet[2972]: W0128 01:31:30.132167 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.137158 kubelet[2972]: E0128 01:31:30.132708 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.137158 kubelet[2972]: W0128 01:31:30.132719 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.137158 kubelet[2972]: E0128 01:31:30.135947 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.137158 kubelet[2972]: E0128 01:31:30.136469 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.137158 kubelet[2972]: W0128 01:31:30.136531 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.137158 kubelet[2972]: E0128 01:31:30.136548 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.146412 kubelet[2972]: E0128 01:31:30.143269 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.146412 kubelet[2972]: E0128 01:31:30.143477 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.146500 containerd[1622]: time="2026-01-28T01:31:30.131325275Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:31:30.152361 kubelet[2972]: E0128 01:31:30.143739 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.153000 kubelet[2972]: W0128 01:31:30.152436 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.155224 kubelet[2972]: E0128 01:31:30.155200 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.159384 kubelet[2972]: E0128 01:31:30.156188 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.159384 kubelet[2972]: W0128 01:31:30.156304 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.159384 kubelet[2972]: E0128 01:31:30.156345 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.164067 kubelet[2972]: E0128 01:31:30.164018 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.164067 kubelet[2972]: W0128 01:31:30.164060 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.164333 kubelet[2972]: E0128 01:31:30.164141 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.166938 kubelet[2972]: E0128 01:31:30.164702 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.166938 kubelet[2972]: W0128 01:31:30.164747 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.168981 kubelet[2972]: E0128 01:31:30.167218 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.168981 kubelet[2972]: E0128 01:31:30.167316 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.168981 kubelet[2972]: W0128 01:31:30.167326 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.168981 kubelet[2972]: E0128 01:31:30.167340 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.168981 kubelet[2972]: E0128 01:31:30.167685 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.168981 kubelet[2972]: W0128 01:31:30.167698 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.168981 kubelet[2972]: E0128 01:31:30.167742 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.173970 kubelet[2972]: E0128 01:31:30.173189 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.173970 kubelet[2972]: W0128 01:31:30.173229 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.173970 kubelet[2972]: E0128 01:31:30.173252 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.177154 kubelet[2972]: E0128 01:31:30.177137 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.177333 kubelet[2972]: W0128 01:31:30.177314 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.177429 kubelet[2972]: E0128 01:31:30.177413 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.180513 kubelet[2972]: E0128 01:31:30.178179 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.180513 kubelet[2972]: W0128 01:31:30.178194 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.180513 kubelet[2972]: E0128 01:31:30.178208 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.188986 containerd[1622]: time="2026-01-28T01:31:30.183791611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:31:30.191119 containerd[1622]: time="2026-01-28T01:31:30.186494813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 5.995406746s" Jan 28 01:31:30.191970 containerd[1622]: time="2026-01-28T01:31:30.191201962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:31:30.210275 containerd[1622]: time="2026-01-28T01:31:30.210145104Z" level=info msg="CreateContainer within sandbox \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:31:30.515314 containerd[1622]: time="2026-01-28T01:31:30.514971827Z" level=info msg="CreateContainer within sandbox \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"557eadce342c81172dbc1e4b7ad789f5dfec19d36406bded9e38847da64698db\"" Jan 28 01:31:30.528656 containerd[1622]: time="2026-01-28T01:31:30.526322949Z" level=info msg="StartContainer for \"557eadce342c81172dbc1e4b7ad789f5dfec19d36406bded9e38847da64698db\"" Jan 28 01:31:30.534329 kubelet[2972]: E0128 01:31:30.534303 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.635333 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.663429 kubelet[2972]: W0128 01:31:30.635442 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.635470 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.646772 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.663429 kubelet[2972]: W0128 01:31:30.646787 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.646807 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.658968 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.663429 kubelet[2972]: W0128 01:31:30.658982 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.659001 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.663429 kubelet[2972]: E0128 01:31:30.663460 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.664241 kubelet[2972]: W0128 01:31:30.663474 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.664241 kubelet[2972]: E0128 01:31:30.663492 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.677748 kubelet[2972]: E0128 01:31:30.677232 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.677748 kubelet[2972]: W0128 01:31:30.677275 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.677748 kubelet[2972]: E0128 01:31:30.677295 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.685161 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.691209 kubelet[2972]: W0128 01:31:30.685177 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.685194 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.685425 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.691209 kubelet[2972]: W0128 01:31:30.685434 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.685444 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.685734 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.691209 kubelet[2972]: W0128 01:31:30.685744 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.685755 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.691209 kubelet[2972]: E0128 01:31:30.686028 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.691752 kubelet[2972]: W0128 01:31:30.686038 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.691752 kubelet[2972]: E0128 01:31:30.686050 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.691752 kubelet[2972]: E0128 01:31:30.686252 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.691752 kubelet[2972]: W0128 01:31:30.686261 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.691752 kubelet[2972]: E0128 01:31:30.686271 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.691752 kubelet[2972]: E0128 01:31:30.686468 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.691752 kubelet[2972]: W0128 01:31:30.686477 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.691752 kubelet[2972]: E0128 01:31:30.686486 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.693888 kubelet[2972]: E0128 01:31:30.693820 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.694545 kubelet[2972]: W0128 01:31:30.694513 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.695099 kubelet[2972]: E0128 01:31:30.695022 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.699951 kubelet[2972]: E0128 01:31:30.699929 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.700087 kubelet[2972]: W0128 01:31:30.700066 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.700185 kubelet[2972]: E0128 01:31:30.700167 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.700791 kubelet[2972]: E0128 01:31:30.700772 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.700992 kubelet[2972]: W0128 01:31:30.700970 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.701085 kubelet[2972]: E0128 01:31:30.701067 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.701489 kubelet[2972]: E0128 01:31:30.701472 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.701573 kubelet[2972]: W0128 01:31:30.701556 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.701738 kubelet[2972]: E0128 01:31:30.701716 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.714819 kubelet[2972]: E0128 01:31:30.714729 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.714819 kubelet[2972]: W0128 01:31:30.714774 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.714819 kubelet[2972]: E0128 01:31:30.714793 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.718002 kubelet[2972]: E0128 01:31:30.717934 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.718310 kubelet[2972]: W0128 01:31:30.718135 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.718310 kubelet[2972]: E0128 01:31:30.718161 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.723241 kubelet[2972]: E0128 01:31:30.723183 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.723498 kubelet[2972]: W0128 01:31:30.723346 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.725967 kubelet[2972]: E0128 01:31:30.725804 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.739449 kubelet[2972]: E0128 01:31:30.739074 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.739449 kubelet[2972]: W0128 01:31:30.739189 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.743174 kubelet[2972]: E0128 01:31:30.743149 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.753172 kubelet[2972]: E0128 01:31:30.745315 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.753172 kubelet[2972]: W0128 01:31:30.745411 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.753172 kubelet[2972]: E0128 01:31:30.745516 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.756017 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.887286 kubelet[2972]: W0128 01:31:30.756053 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.756212 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.756817 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.887286 kubelet[2972]: W0128 01:31:30.756828 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.756992 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.757544 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.887286 kubelet[2972]: W0128 01:31:30.757554 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.757770 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.887286 kubelet[2972]: E0128 01:31:30.759275 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.902138 kubelet[2972]: W0128 01:31:30.759287 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.902138 kubelet[2972]: E0128 01:31:30.759443 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.902138 kubelet[2972]: E0128 01:31:30.827775 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.902138 kubelet[2972]: W0128 01:31:30.832392 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.902138 kubelet[2972]: E0128 01:31:30.864102 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.902138 kubelet[2972]: E0128 01:31:30.865154 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.902138 kubelet[2972]: W0128 01:31:30.865253 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.902138 kubelet[2972]: E0128 01:31:30.865461 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:31:30.902138 kubelet[2972]: E0128 01:31:30.886657 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.902138 kubelet[2972]: W0128 01:31:30.886671 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.936270 kubelet[2972]: E0128 01:31:30.922198 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.948198 kubelet[2972]: E0128 01:31:30.948173 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.948333 kubelet[2972]: W0128 01:31:30.948313 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.948813 kubelet[2972]: E0128 01:31:30.948794 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.949883 kubelet[2972]: W0128 01:31:30.949830 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.950227 kubelet[2972]: E0128 01:31:30.950212 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.950304 kubelet[2972]: W0128 01:31:30.950291 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.950371 kubelet[2972]: E0128 01:31:30.950356 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.956137 kubelet[2972]: E0128 01:31:30.956119 2972 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:31:30.956226 kubelet[2972]: W0128 01:31:30.956209 2972 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:31:30.956305 kubelet[2972]: E0128 01:31:30.956290 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.956462 kubelet[2972]: E0128 01:31:30.956445 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:31:30.962803 kubelet[2972]: E0128 01:31:30.959811 2972 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
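The elided storm above is the kubelet's FlexVolume dynamic-plugin prober at work: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ (here nodeagent~uds) it executes the driver binary with the single argument init and tries to unmarshal the binary's stdout as a JSON status object. The uds executable has not been installed yet -- the flexvol-driver container that ships it is only being pulled and created in the entries interleaved above -- so every probe produces empty output and the JSON decode fails with "unexpected end of JSON input". The following is a minimal sketch of the handshake the prober expects, assuming the standard FlexVolume status shape; it is an illustration, not Calico's actual uds driver.

// flexvol_stub.go -- a hypothetical, minimal FlexVolume driver that answers
// only the "init" call the kubelet prober issues; real drivers (such as the
// uds binary the flexvol-driver container installs) also implement the
// mount/unmount calls.
package main

import (
    "encoding/json"
    "os"
)

// driverStatus mirrors the JSON object the kubelet unmarshals from the
// driver's stdout; an empty stdout is exactly what yields the
// "unexpected end of JSON input" errors above.
type driverStatus struct {
    Status       string          `json:"status"`
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    out := json.NewEncoder(os.Stdout)
    if len(os.Args) > 1 && os.Args[1] == "init" {
        // Declare the driver healthy and opt out of attach/detach calls.
        out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        return
    }
    // Any other call: report it as unsupported, per the FlexVolume convention.
    out.Encode(driverStatus{Status: "Not supported"})
    os.Exit(1)
}

Once a binary like this lands in the nodeagent~uds directory, the probe errors stop -- consistent with the log, where the storm ends shortly after the flexvol-driver container starts below.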
Jan 28 01:31:31.200979 containerd[1622]: time="2026-01-28T01:31:31.200672308Z" level=info msg="StartContainer for \"557eadce342c81172dbc1e4b7ad789f5dfec19d36406bded9e38847da64698db\" returns successfully"
Jan 28 01:31:31.555570 kubelet[2972]: E0128 01:31:31.554416 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:31:31.588984 kubelet[2972]: E0128 01:31:31.561419 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:32.106420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-557eadce342c81172dbc1e4b7ad789f5dfec19d36406bded9e38847da64698db-rootfs.mount: Deactivated successfully.
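The recurring dns.go "Nameserver limits exceeded" entries are a separate, benign issue: the node's resolv.conf evidently lists more than three nameservers, and the kubelet trims the list to the first three (the classic glibc resolver limit) when composing the applied nameserver line, logging the rest as omitted. A sketch of that trimming under those assumptions -- the fourth server here is invented for illustration, only the three applied ones appear in the log:

// dnscap.go -- hypothetical sketch of the nameserver capping behind the
// "Nameserver limits exceeded" warnings above.
package main

import "fmt"

const maxNameservers = 3 // glibc resolv.conf limit the kubelet enforces

func applyNameserverLimit(ns []string) []string {
    if len(ns) > maxNameservers {
        return ns[:maxNameservers] // extras are dropped and reported as omitted
    }
    return ns
}

func main() {
    // "8.8.4.4" is an assumed extra entry; the log only shows the applied three.
    resolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
    fmt.Println("applied nameserver line:", applyNameserverLimit(resolvConf))
}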
Jan 28 01:31:32.454988 containerd[1622]: time="2026-01-28T01:31:32.453667847Z" level=info msg="shim disconnected" id=557eadce342c81172dbc1e4b7ad789f5dfec19d36406bded9e38847da64698db namespace=k8s.io Jan 28 01:31:32.454988 containerd[1622]: time="2026-01-28T01:31:32.453818181Z" level=warning msg="cleaning up after shim disconnected" id=557eadce342c81172dbc1e4b7ad789f5dfec19d36406bded9e38847da64698db namespace=k8s.io Jan 28 01:31:32.454988 containerd[1622]: time="2026-01-28T01:31:32.453837207Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:31:32.507814 containerd[1622]: time="2026-01-28T01:31:32.506385084Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:31:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:31:32.645197 kubelet[2972]: E0128 01:31:32.640532 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:32.648772 containerd[1622]: time="2026-01-28T01:31:32.648293499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:31:33.557078 kubelet[2972]: E0128 01:31:33.557027 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:34.563095 kubelet[2972]: E0128 01:31:34.561220 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:35.562444 kubelet[2972]: E0128 01:31:35.558254 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:37.565478 kubelet[2972]: E0128 01:31:37.562527 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:37.565478 kubelet[2972]: E0128 01:31:37.570963 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:39.557908 kubelet[2972]: E0128 01:31:39.554972 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:41.555799 kubelet[2972]: E0128 01:31:41.554326 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:43.569256 kubelet[2972]: E0128 01:31:43.568700 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:45.906524 kubelet[2972]: E0128 01:31:45.902436 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:47.555074 kubelet[2972]: E0128 01:31:47.554937 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:48.003875 containerd[1622]: time="2026-01-28T01:31:48.002359088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:31:48.014479 containerd[1622]: time="2026-01-28T01:31:48.013980831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 28 01:31:48.022283 containerd[1622]: time="2026-01-28T01:31:48.020960826Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:31:48.049981 containerd[1622]: time="2026-01-28T01:31:48.049197935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:31:48.057804 containerd[1622]: time="2026-01-28T01:31:48.056348031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 15.40797994s" Jan 28 01:31:48.057804 containerd[1622]: time="2026-01-28T01:31:48.056402515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:31:48.073109 containerd[1622]: time="2026-01-28T01:31:48.073001086Z" level=info msg="CreateContainer within sandbox \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:31:48.204026 containerd[1622]: time="2026-01-28T01:31:48.199755674Z" level=info msg="CreateContainer within sandbox \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"60efb007de1c22ee3f97bb19b93e2b7aa1b1d034cd5ab694a999f1d5416f5082\"" Jan 28 01:31:48.204026 containerd[1622]: time="2026-01-28T01:31:48.201805063Z" level=info 
msg="StartContainer for \"60efb007de1c22ee3f97bb19b93e2b7aa1b1d034cd5ab694a999f1d5416f5082\"" Jan 28 01:31:48.412758 containerd[1622]: time="2026-01-28T01:31:48.409562814Z" level=info msg="StartContainer for \"60efb007de1c22ee3f97bb19b93e2b7aa1b1d034cd5ab694a999f1d5416f5082\" returns successfully" Jan 28 01:31:48.980105 kubelet[2972]: E0128 01:31:48.979845 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:49.556858 kubelet[2972]: E0128 01:31:49.556701 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:50.003432 kubelet[2972]: E0128 01:31:49.998874 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:51.556790 kubelet[2972]: E0128 01:31:51.554731 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:52.424673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60efb007de1c22ee3f97bb19b93e2b7aa1b1d034cd5ab694a999f1d5416f5082-rootfs.mount: Deactivated successfully. Jan 28 01:31:52.466523 containerd[1622]: time="2026-01-28T01:31:52.464924379Z" level=info msg="shim disconnected" id=60efb007de1c22ee3f97bb19b93e2b7aa1b1d034cd5ab694a999f1d5416f5082 namespace=k8s.io Jan 28 01:31:52.466523 containerd[1622]: time="2026-01-28T01:31:52.464998019Z" level=warning msg="cleaning up after shim disconnected" id=60efb007de1c22ee3f97bb19b93e2b7aa1b1d034cd5ab694a999f1d5416f5082 namespace=k8s.io Jan 28 01:31:52.466523 containerd[1622]: time="2026-01-28T01:31:52.465012876Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:31:52.504738 kubelet[2972]: I0128 01:31:52.503962 2972 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:31:52.882557 kubelet[2972]: I0128 01:31:52.880438 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/293f11a4-1519-4e40-8e4f-23ffad2f9d2d-calico-apiserver-certs\") pod \"calico-apiserver-69686dc768-ln9mw\" (UID: \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\") " pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" Jan 28 01:31:52.882557 kubelet[2972]: I0128 01:31:52.880489 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-backend-key-pair\") pod \"whisker-59c86599d9-sc97f\" (UID: \"7efc6fb0-0d34-4603-98de-2c82b7e71158\") " pod="calico-system/whisker-59c86599d9-sc97f" Jan 28 01:31:52.882557 kubelet[2972]: I0128 01:31:52.880518 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a0975c98-58e0-4afd-9150-95ec5af111e8-goldmane-key-pair\") pod \"goldmane-666569f655-dp6nh\" (UID: \"a0975c98-58e0-4afd-9150-95ec5af111e8\") " pod="calico-system/goldmane-666569f655-dp6nh" Jan 28 01:31:52.882557 kubelet[2972]: I0128 01:31:52.880555 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pz9\" (UniqueName: \"kubernetes.io/projected/7b83327f-83d8-4d0b-8be8-e67980a37b46-kube-api-access-62pz9\") pod \"calico-kube-controllers-f96f445cb-js8kb\" (UID: \"7b83327f-83d8-4d0b-8be8-e67980a37b46\") " pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" Jan 28 01:31:52.882557 kubelet[2972]: I0128 01:31:52.880581 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkb45\" (UniqueName: \"kubernetes.io/projected/a0975c98-58e0-4afd-9150-95ec5af111e8-kube-api-access-zkb45\") pod \"goldmane-666569f655-dp6nh\" (UID: \"a0975c98-58e0-4afd-9150-95ec5af111e8\") " pod="calico-system/goldmane-666569f655-dp6nh" Jan 28 01:31:52.901886 kubelet[2972]: I0128 01:31:52.880676 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25dca920-f21c-49d2-adf9-753622c450d8-calico-apiserver-certs\") pod \"calico-apiserver-69686dc768-5qb5l\" (UID: \"25dca920-f21c-49d2-adf9-753622c450d8\") " pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" Jan 28 01:31:52.901886 kubelet[2972]: I0128 01:31:52.880702 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4s7x\" (UniqueName: \"kubernetes.io/projected/293f11a4-1519-4e40-8e4f-23ffad2f9d2d-kube-api-access-n4s7x\") pod \"calico-apiserver-69686dc768-ln9mw\" (UID: \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\") " pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" Jan 28 01:31:52.901886 kubelet[2972]: I0128 01:31:52.880727 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441fbe90-529b-45d0-b9a6-f443cf214304-config-volume\") pod \"coredns-668d6bf9bc-rt7g9\" (UID: \"441fbe90-529b-45d0-b9a6-f443cf214304\") " pod="kube-system/coredns-668d6bf9bc-rt7g9" Jan 28 01:31:52.901886 kubelet[2972]: I0128 01:31:52.880753 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-ca-bundle\") pod \"whisker-59c86599d9-sc97f\" (UID: \"7efc6fb0-0d34-4603-98de-2c82b7e71158\") " pod="calico-system/whisker-59c86599d9-sc97f" Jan 28 01:31:52.901886 kubelet[2972]: I0128 01:31:52.880777 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7adaa55-8214-45ce-9d9c-4b2fe100270c-config-volume\") pod \"coredns-668d6bf9bc-556k8\" (UID: \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\") " pod="kube-system/coredns-668d6bf9bc-556k8" Jan 28 01:31:52.912384 kubelet[2972]: I0128 01:31:52.880799 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vnv\" (UniqueName: \"kubernetes.io/projected/c7adaa55-8214-45ce-9d9c-4b2fe100270c-kube-api-access-t9vnv\") pod \"coredns-668d6bf9bc-556k8\" (UID: \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\") " 
pod="kube-system/coredns-668d6bf9bc-556k8" Jan 28 01:31:52.912384 kubelet[2972]: I0128 01:31:52.880825 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b83327f-83d8-4d0b-8be8-e67980a37b46-tigera-ca-bundle\") pod \"calico-kube-controllers-f96f445cb-js8kb\" (UID: \"7b83327f-83d8-4d0b-8be8-e67980a37b46\") " pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" Jan 28 01:31:52.912384 kubelet[2972]: I0128 01:31:52.880843 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0975c98-58e0-4afd-9150-95ec5af111e8-goldmane-ca-bundle\") pod \"goldmane-666569f655-dp6nh\" (UID: \"a0975c98-58e0-4afd-9150-95ec5af111e8\") " pod="calico-system/goldmane-666569f655-dp6nh" Jan 28 01:31:52.912384 kubelet[2972]: I0128 01:31:52.880860 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx52m\" (UniqueName: \"kubernetes.io/projected/25dca920-f21c-49d2-adf9-753622c450d8-kube-api-access-xx52m\") pod \"calico-apiserver-69686dc768-5qb5l\" (UID: \"25dca920-f21c-49d2-adf9-753622c450d8\") " pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" Jan 28 01:31:52.912384 kubelet[2972]: I0128 01:31:52.880875 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvrl\" (UniqueName: \"kubernetes.io/projected/441fbe90-529b-45d0-b9a6-f443cf214304-kube-api-access-rvvrl\") pod \"coredns-668d6bf9bc-rt7g9\" (UID: \"441fbe90-529b-45d0-b9a6-f443cf214304\") " pod="kube-system/coredns-668d6bf9bc-rt7g9" Jan 28 01:31:52.912652 kubelet[2972]: I0128 01:31:52.880888 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcff5\" (UniqueName: \"kubernetes.io/projected/7efc6fb0-0d34-4603-98de-2c82b7e71158-kube-api-access-zcff5\") pod \"whisker-59c86599d9-sc97f\" (UID: \"7efc6fb0-0d34-4603-98de-2c82b7e71158\") " pod="calico-system/whisker-59c86599d9-sc97f" Jan 28 01:31:52.912652 kubelet[2972]: I0128 01:31:52.880901 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0975c98-58e0-4afd-9150-95ec5af111e8-config\") pod \"goldmane-666569f655-dp6nh\" (UID: \"a0975c98-58e0-4afd-9150-95ec5af111e8\") " pod="calico-system/goldmane-666569f655-dp6nh" Jan 28 01:31:53.083660 kubelet[2972]: E0128 01:31:53.081852 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:53.124661 containerd[1622]: time="2026-01-28T01:31:53.096689994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:31:53.151711 kubelet[2972]: E0128 01:31:53.147268 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:53.192206 containerd[1622]: time="2026-01-28T01:31:53.187069436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-556k8,Uid:c7adaa55-8214-45ce-9d9c-4b2fe100270c,Namespace:kube-system,Attempt:0,}" Jan 28 01:31:53.446668 kubelet[2972]: E0128 01:31:53.443028 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:53.505324 containerd[1622]: time="2026-01-28T01:31:53.502824961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-ln9mw,Uid:293f11a4-1519-4e40-8e4f-23ffad2f9d2d,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:31:53.505324 containerd[1622]: time="2026-01-28T01:31:53.503224536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rt7g9,Uid:441fbe90-529b-45d0-b9a6-f443cf214304,Namespace:kube-system,Attempt:0,}" Jan 28 01:31:53.505324 containerd[1622]: time="2026-01-28T01:31:53.504422018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-5qb5l,Uid:25dca920-f21c-49d2-adf9-753622c450d8,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:31:53.505324 containerd[1622]: time="2026-01-28T01:31:53.504954365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dp6nh,Uid:a0975c98-58e0-4afd-9150-95ec5af111e8,Namespace:calico-system,Attempt:0,}" Jan 28 01:31:53.505324 containerd[1622]: time="2026-01-28T01:31:53.505161146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c86599d9-sc97f,Uid:7efc6fb0-0d34-4603-98de-2c82b7e71158,Namespace:calico-system,Attempt:0,}" Jan 28 01:31:53.510072 containerd[1622]: time="2026-01-28T01:31:53.509588621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f96f445cb-js8kb,Uid:7b83327f-83d8-4d0b-8be8-e67980a37b46,Namespace:calico-system,Attempt:0,}" Jan 28 01:31:53.612497 containerd[1622]: time="2026-01-28T01:31:53.612198651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gwj5,Uid:b4b5e90d-930c-4b60-ab0a-ec73967e82da,Namespace:calico-system,Attempt:0,}" Jan 28 01:31:54.512879 containerd[1622]: time="2026-01-28T01:31:54.511939290Z" level=error msg="Failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.519285 containerd[1622]: time="2026-01-28T01:31:54.518871987Z" level=error msg="encountered an error cleaning up failed sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.519285 containerd[1622]: time="2026-01-28T01:31:54.519048280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-556k8,Uid:c7adaa55-8214-45ce-9d9c-4b2fe100270c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.521036 kubelet[2972]: E0128 01:31:54.519886 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.521672 kubelet[2972]: E0128 01:31:54.521289 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-556k8" Jan 28 01:31:54.522300 kubelet[2972]: E0128 01:31:54.521958 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-556k8" Jan 28 01:31:54.522300 kubelet[2972]: E0128 01:31:54.522110 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-556k8_kube-system(c7adaa55-8214-45ce-9d9c-4b2fe100270c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-556k8_kube-system(c7adaa55-8214-45ce-9d9c-4b2fe100270c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-556k8" podUID="c7adaa55-8214-45ce-9d9c-4b2fe100270c" Jan 28 01:31:54.528265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657-shm.mount: Deactivated successfully. 
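Every sandbox failure in this stretch bottoms out in the same CNI error: plugin type="calico" failed: stat /var/lib/calico/nodename: no such file or directory. As the error text itself says, the Calico CNI binary reads that file to learn which Calico node it belongs to, and the calico/node container writes it at startup; install-cni has run, but calico/node is not up yet, so every RunPodSandbox attempt fails until it is. A minimal sketch of that gate, assuming only what the error wording states -- the surrounding program is illustrative, not Calico's actual plugin code:

// nodename_gate.go -- sketch of the precondition behind the repeated
// "stat /var/lib/calico/nodename" sandbox failures above.
package main

import (
    "fmt"
    "os"
    "strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// calicoNodename fails the same way the CNI plugin reports until
// calico/node has started and written its node name into the shared
// host path.
func calicoNodename() (string, error) {
    b, err := os.ReadFile(nodenameFile)
    if err != nil {
        return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
    }
    return strings.TrimSpace(string(b)), nil
}

func main() {
    name, err := calicoNodename()
    if err != nil {
        fmt.Fprintln(os.Stderr, err) // the state every sandbox above is stuck in
        os.Exit(1)
    }
    fmt.Println("calico node:", name)
}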
Jan 28 01:31:54.936914 containerd[1622]: time="2026-01-28T01:31:54.936729225Z" level=error msg="Failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.964302 containerd[1622]: time="2026-01-28T01:31:54.956375921Z" level=error msg="encountered an error cleaning up failed sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.964302 containerd[1622]: time="2026-01-28T01:31:54.957822434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-5qb5l,Uid:25dca920-f21c-49d2-adf9-753622c450d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.964721 kubelet[2972]: E0128 01:31:54.959083 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:54.964721 kubelet[2972]: E0128 01:31:54.959157 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" Jan 28 01:31:54.964721 kubelet[2972]: E0128 01:31:54.959184 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" Jan 28 01:31:54.964881 kubelet[2972]: E0128 01:31:54.959235 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:31:55.126871 kubelet[2972]: I0128 01:31:55.125963 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:31:55.151575 containerd[1622]: time="2026-01-28T01:31:55.148951715Z" level=error msg="Failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.154190 kubelet[2972]: I0128 01:31:55.153692 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:31:55.163979 containerd[1622]: time="2026-01-28T01:31:55.163933127Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:31:55.165949 containerd[1622]: time="2026-01-28T01:31:55.165840802Z" level=error msg="encountered an error cleaning up failed sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.166194 containerd[1622]: time="2026-01-28T01:31:55.166161538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-ln9mw,Uid:293f11a4-1519-4e40-8e4f-23ffad2f9d2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.166835 kubelet[2972]: E0128 01:31:55.166789 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.167202 kubelet[2972]: E0128 01:31:55.167006 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" Jan 28 01:31:55.167202 kubelet[2972]: E0128 01:31:55.167045 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" Jan 28 01:31:55.167202 kubelet[2972]: E0128 01:31:55.167150 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:31:55.169938 containerd[1622]: time="2026-01-28T01:31:55.169858392Z" level=info msg="Ensure that sandbox 7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c in task-service has been cleanup successfully" Jan 28 01:31:55.177199 containerd[1622]: time="2026-01-28T01:31:55.177078135Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:31:55.177816 containerd[1622]: time="2026-01-28T01:31:55.177398470Z" level=info msg="Ensure that sandbox 7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657 in task-service has been cleanup successfully" Jan 28 01:31:55.187571 containerd[1622]: time="2026-01-28T01:31:55.186856448Z" level=error msg="Failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.189390 containerd[1622]: time="2026-01-28T01:31:55.189283724Z" level=error msg="encountered an error cleaning up failed sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.191862 containerd[1622]: time="2026-01-28T01:31:55.189358336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c86599d9-sc97f,Uid:7efc6fb0-0d34-4603-98de-2c82b7e71158,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.192298 kubelet[2972]: E0128 01:31:55.192071 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.192298 kubelet[2972]: E0128 01:31:55.192145 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c86599d9-sc97f" Jan 28 01:31:55.192298 kubelet[2972]: E0128 01:31:55.192181 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c86599d9-sc97f" Jan 28 01:31:55.192565 kubelet[2972]: E0128 01:31:55.192234 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59c86599d9-sc97f_calico-system(7efc6fb0-0d34-4603-98de-2c82b7e71158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59c86599d9-sc97f_calico-system(7efc6fb0-0d34-4603-98de-2c82b7e71158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c86599d9-sc97f" podUID="7efc6fb0-0d34-4603-98de-2c82b7e71158" Jan 28 01:31:55.281278 containerd[1622]: time="2026-01-28T01:31:55.277813747Z" level=error msg="Failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.286694 containerd[1622]: time="2026-01-28T01:31:55.286283833Z" level=error msg="encountered an error cleaning up failed sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.291929 containerd[1622]: time="2026-01-28T01:31:55.287371109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gwj5,Uid:b4b5e90d-930c-4b60-ab0a-ec73967e82da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.293187 kubelet[2972]: E0128 01:31:55.292470 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.293187 kubelet[2972]: E0128 01:31:55.292551 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:55.293187 kubelet[2972]: E0128 01:31:55.292586 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gwj5" Jan 28 01:31:55.299030 kubelet[2972]: E0128 01:31:55.292716 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:55.300903 containerd[1622]: time="2026-01-28T01:31:55.300850552Z" level=error msg="Failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.306032 containerd[1622]: time="2026-01-28T01:31:55.305475114Z" level=error msg="encountered an error cleaning up failed sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.306032 containerd[1622]: time="2026-01-28T01:31:55.305560696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rt7g9,Uid:441fbe90-529b-45d0-b9a6-f443cf214304,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.308340 kubelet[2972]: E0128 01:31:55.308082 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.308340 kubelet[2972]: E0128 01:31:55.308171 2972 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rt7g9" Jan 28 01:31:55.308340 kubelet[2972]: E0128 01:31:55.308206 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rt7g9" Jan 28 01:31:55.308673 kubelet[2972]: E0128 01:31:55.308271 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rt7g9_kube-system(441fbe90-529b-45d0-b9a6-f443cf214304)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rt7g9_kube-system(441fbe90-529b-45d0-b9a6-f443cf214304)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rt7g9" podUID="441fbe90-529b-45d0-b9a6-f443cf214304" Jan 28 01:31:55.327541 containerd[1622]: time="2026-01-28T01:31:55.323890154Z" level=error msg="Failed to destroy network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.330327 containerd[1622]: time="2026-01-28T01:31:55.330278384Z" level=error msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" failed" error="failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.334980 kubelet[2972]: E0128 01:31:55.334454 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:31:55.334980 kubelet[2972]: E0128 01:31:55.334800 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c"} Jan 28 01:31:55.334980 kubelet[2972]: E0128 01:31:55.334904 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25dca920-f21c-49d2-adf9-753622c450d8\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:55.334980 kubelet[2972]: E0128 01:31:55.334941 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25dca920-f21c-49d2-adf9-753622c450d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:31:55.339862 containerd[1622]: time="2026-01-28T01:31:55.338968461Z" level=error msg="encountered an error cleaning up failed sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.339862 containerd[1622]: time="2026-01-28T01:31:55.339047340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dp6nh,Uid:a0975c98-58e0-4afd-9150-95ec5af111e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.340099 kubelet[2972]: E0128 01:31:55.339249 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.340099 kubelet[2972]: E0128 01:31:55.339304 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dp6nh" Jan 28 01:31:55.340099 kubelet[2972]: E0128 01:31:55.339330 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dp6nh" Jan 28 01:31:55.340237 kubelet[2972]: E0128 01:31:55.339379 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:31:55.344002 containerd[1622]: time="2026-01-28T01:31:55.343958337Z" level=error msg="Failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.348137 containerd[1622]: time="2026-01-28T01:31:55.348089424Z" level=error msg="encountered an error cleaning up failed sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.348365 containerd[1622]: time="2026-01-28T01:31:55.348327995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f96f445cb-js8kb,Uid:7b83327f-83d8-4d0b-8be8-e67980a37b46,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.352517 kubelet[2972]: E0128 01:31:55.349297 2972 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.352517 kubelet[2972]: E0128 01:31:55.349391 2972 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" Jan 28 01:31:55.352517 kubelet[2972]: E0128 01:31:55.349480 2972 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" Jan 28 01:31:55.352780 
kubelet[2972]: E0128 01:31:55.349540 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:31:55.356572 containerd[1622]: time="2026-01-28T01:31:55.356397063Z" level=error msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" failed" error="failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:55.357049 kubelet[2972]: E0128 01:31:55.356857 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:31:55.357049 kubelet[2972]: E0128 01:31:55.356919 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657"} Jan 28 01:31:55.357049 kubelet[2972]: E0128 01:31:55.356969 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:55.357049 kubelet[2972]: E0128 01:31:55.357004 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-556k8" podUID="c7adaa55-8214-45ce-9d9c-4b2fe100270c" Jan 28 01:31:55.542576 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3-shm.mount: Deactivated successfully. 
Jan 28 01:31:55.544852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57-shm.mount: Deactivated successfully. Jan 28 01:31:55.545344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889-shm.mount: Deactivated successfully. Jan 28 01:31:55.545665 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f-shm.mount: Deactivated successfully. Jan 28 01:31:55.545870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c-shm.mount: Deactivated successfully. Jan 28 01:31:56.167433 kubelet[2972]: I0128 01:31:56.167183 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:31:56.170827 containerd[1622]: time="2026-01-28T01:31:56.170784410Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:31:56.172386 containerd[1622]: time="2026-01-28T01:31:56.172266400Z" level=info msg="Ensure that sandbox 281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1 in task-service has been cleanup successfully" Jan 28 01:31:56.174873 kubelet[2972]: I0128 01:31:56.174846 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:31:56.176070 containerd[1622]: time="2026-01-28T01:31:56.175930088Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:31:56.179956 kubelet[2972]: I0128 01:31:56.179932 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:31:56.180795 containerd[1622]: time="2026-01-28T01:31:56.180763309Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:31:56.181149 containerd[1622]: time="2026-01-28T01:31:56.181125394Z" level=info msg="Ensure that sandbox cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889 in task-service has been cleanup successfully" Jan 28 01:31:56.182149 containerd[1622]: time="2026-01-28T01:31:56.181972016Z" level=info msg="Ensure that sandbox d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3 in task-service has been cleanup successfully" Jan 28 01:31:56.187425 kubelet[2972]: I0128 01:31:56.187006 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:31:56.188719 containerd[1622]: time="2026-01-28T01:31:56.188224994Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:31:56.189012 containerd[1622]: time="2026-01-28T01:31:56.188837391Z" level=info msg="Ensure that sandbox edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57 in task-service has been cleanup successfully" Jan 28 01:31:56.205513 kubelet[2972]: I0128 01:31:56.203882 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:31:56.228852 containerd[1622]: time="2026-01-28T01:31:56.216809323Z" 
level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:31:56.228852 containerd[1622]: time="2026-01-28T01:31:56.217048335Z" level=info msg="Ensure that sandbox 3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec in task-service has been cleanup successfully" Jan 28 01:31:56.228852 containerd[1622]: time="2026-01-28T01:31:56.228387158Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:31:56.229073 kubelet[2972]: I0128 01:31:56.227421 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:31:56.229332 containerd[1622]: time="2026-01-28T01:31:56.229050171Z" level=info msg="Ensure that sandbox 3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f in task-service has been cleanup successfully" Jan 28 01:31:56.399874 containerd[1622]: time="2026-01-28T01:31:56.399744702Z" level=error msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" failed" error="failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:56.400325 kubelet[2972]: E0128 01:31:56.400236 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:31:56.400325 kubelet[2972]: E0128 01:31:56.400303 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3"} Jan 28 01:31:56.400590 kubelet[2972]: E0128 01:31:56.400350 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:56.400590 kubelet[2972]: E0128 01:31:56.400380 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rt7g9" podUID="441fbe90-529b-45d0-b9a6-f443cf214304" Jan 28 01:31:56.444571 containerd[1622]: time="2026-01-28T01:31:56.438885169Z" level=error msg="StopPodSandbox for 
\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" failed" error="failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:56.445916 kubelet[2972]: E0128 01:31:56.439245 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:31:56.445916 kubelet[2972]: E0128 01:31:56.439307 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57"} Jan 28 01:31:56.445916 kubelet[2972]: E0128 01:31:56.439359 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:56.445916 kubelet[2972]: E0128 01:31:56.439412 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:31:56.456905 containerd[1622]: time="2026-01-28T01:31:56.452924305Z" level=error msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" failed" error="failed to destroy network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:56.456905 containerd[1622]: time="2026-01-28T01:31:56.455075137Z" level=error msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" failed" error="failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:56.457150 kubelet[2972]: E0128 01:31:56.453369 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:31:56.457150 kubelet[2972]: E0128 01:31:56.453504 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1"} Jan 28 01:31:56.457150 kubelet[2972]: E0128 01:31:56.453556 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0975c98-58e0-4afd-9150-95ec5af111e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:56.457150 kubelet[2972]: E0128 01:31:56.453656 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0975c98-58e0-4afd-9150-95ec5af111e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:31:56.457899 kubelet[2972]: E0128 01:31:56.455255 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:31:56.457899 kubelet[2972]: E0128 01:31:56.455304 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889"} Jan 28 01:31:56.457899 kubelet[2972]: E0128 01:31:56.455349 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:56.457899 kubelet[2972]: E0128 01:31:56.455381 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c86599d9-sc97f" podUID="7efc6fb0-0d34-4603-98de-2c82b7e71158" Jan 28 01:31:56.474418 containerd[1622]: time="2026-01-28T01:31:56.470418384Z" level=error msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" failed" error="failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:56.475266 kubelet[2972]: E0128 01:31:56.470863 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:31:56.475266 kubelet[2972]: E0128 01:31:56.470925 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"} Jan 28 01:31:56.475266 kubelet[2972]: E0128 01:31:56.470978 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:56.475266 kubelet[2972]: E0128 01:31:56.471010 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:31:56.487420 containerd[1622]: time="2026-01-28T01:31:56.486365013Z" level=error msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" failed" error="failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:31:56.487853 kubelet[2972]: E0128 01:31:56.486930 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:31:56.487853 kubelet[2972]: E0128 01:31:56.487005 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f"} Jan 28 01:31:56.487853 kubelet[2972]: E0128 01:31:56.487055 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:31:56.487853 kubelet[2972]: E0128 01:31:56.487093 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:32:08.311725 containerd[1622]: time="2026-01-28T01:32:08.290359945Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:32:08.311725 containerd[1622]: time="2026-01-28T01:32:08.290583958Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:32:08.416124 containerd[1622]: time="2026-01-28T01:32:08.290693926Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:32:08.735689 containerd[1622]: time="2026-01-28T01:32:08.290724073Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:32:08.837696 containerd[1622]: time="2026-01-28T01:32:08.830747835Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:32:09.055496 containerd[1622]: time="2026-01-28T01:32:09.055426751Z" level=error msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" failed" error="failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:09.072117 kubelet[2972]: E0128 01:32:09.057578 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:32:09.077474 kubelet[2972]: E0128 01:32:09.077193 2972 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657"} Jan 28 01:32:09.078282 kubelet[2972]: E0128 01:32:09.078146 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:09.079452 kubelet[2972]: E0128 01:32:09.079191 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-556k8" podUID="c7adaa55-8214-45ce-9d9c-4b2fe100270c" Jan 28 01:32:09.148737 containerd[1622]: time="2026-01-28T01:32:09.147898337Z" level=error msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" failed" error="failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:09.152260 kubelet[2972]: E0128 01:32:09.151052 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:32:09.152260 kubelet[2972]: E0128 01:32:09.151508 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c"} Jan 28 01:32:09.152260 kubelet[2972]: E0128 01:32:09.151564 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25dca920-f21c-49d2-adf9-753622c450d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:09.152260 kubelet[2972]: E0128 01:32:09.151748 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25dca920-f21c-49d2-adf9-753622c450d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:32:09.157094 containerd[1622]: time="2026-01-28T01:32:09.154893243Z" level=error msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" failed" error="failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:09.157197 kubelet[2972]: E0128 01:32:09.155292 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:32:09.157197 kubelet[2972]: E0128 01:32:09.155366 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889"} Jan 28 01:32:09.157197 kubelet[2972]: E0128 01:32:09.155779 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:09.157197 kubelet[2972]: E0128 01:32:09.155863 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c86599d9-sc97f" podUID="7efc6fb0-0d34-4603-98de-2c82b7e71158" Jan 28 01:32:09.160339 containerd[1622]: time="2026-01-28T01:32:09.159353023Z" level=error msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" failed" error="failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:09.160740 kubelet[2972]: E0128 01:32:09.159969 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:09.160740 kubelet[2972]: E0128 01:32:09.160019 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3"} Jan 28 01:32:09.160740 kubelet[2972]: E0128 01:32:09.160062 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:09.160740 kubelet[2972]: E0128 01:32:09.160091 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rt7g9" podUID="441fbe90-529b-45d0-b9a6-f443cf214304" Jan 28 01:32:09.215322 containerd[1622]: time="2026-01-28T01:32:09.213148402Z" level=error msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" failed" error="failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:09.215465 kubelet[2972]: E0128 01:32:09.213428 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:32:09.215465 kubelet[2972]: E0128 01:32:09.213499 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"} Jan 28 01:32:09.215465 kubelet[2972]: E0128 01:32:09.213555 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:09.215465 kubelet[2972]: E0128 01:32:09.213588 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:32:10.562960 containerd[1622]: time="2026-01-28T01:32:10.562913493Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:32:10.579574 containerd[1622]: time="2026-01-28T01:32:10.575543652Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:32:10.972445 containerd[1622]: time="2026-01-28T01:32:10.965181934Z" level=error msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" failed" error="failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:10.973483 kubelet[2972]: E0128 01:32:10.965693 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:32:10.973483 kubelet[2972]: E0128 01:32:10.965775 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57"} Jan 28 01:32:10.973483 kubelet[2972]: E0128 01:32:10.965825 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:10.973483 kubelet[2972]: E0128 01:32:10.970976 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:32:10.991228 containerd[1622]: time="2026-01-28T01:32:10.989931286Z" level=error msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" failed" error="failed to 
destroy network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:10.991392 kubelet[2972]: E0128 01:32:10.990402 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:32:10.991392 kubelet[2972]: E0128 01:32:10.990467 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1"} Jan 28 01:32:10.991392 kubelet[2972]: E0128 01:32:10.990511 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0975c98-58e0-4afd-9150-95ec5af111e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:10.991392 kubelet[2972]: E0128 01:32:10.990542 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0975c98-58e0-4afd-9150-95ec5af111e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:32:11.601164 containerd[1622]: time="2026-01-28T01:32:11.598757418Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:32:12.040172 containerd[1622]: time="2026-01-28T01:32:12.039416280Z" level=error msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" failed" error="failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:12.067820 kubelet[2972]: E0128 01:32:12.051066 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:32:12.067820 kubelet[2972]: E0128 01:32:12.051141 2972 kuberuntime_manager.go:1546] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f"} Jan 28 01:32:12.067820 kubelet[2972]: E0128 01:32:12.051185 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:12.067820 kubelet[2972]: E0128 01:32:12.051223 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:32:19.569017 containerd[1622]: time="2026-01-28T01:32:19.563556502Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:32:19.899993 containerd[1622]: time="2026-01-28T01:32:19.898496250Z" level=error msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" failed" error="failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:19.901041 kubelet[2972]: E0128 01:32:19.899006 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:32:19.901041 kubelet[2972]: E0128 01:32:19.899085 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"} Jan 28 01:32:19.901041 kubelet[2972]: E0128 01:32:19.899196 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:19.901041 kubelet[2972]: E0128 01:32:19.899234 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:32:20.580489 containerd[1622]: time="2026-01-28T01:32:20.568517853Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:32:20.893188 containerd[1622]: time="2026-01-28T01:32:20.891784339Z" level=error msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" failed" error="failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:20.893423 kubelet[2972]: E0128 01:32:20.892719 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:20.893423 kubelet[2972]: E0128 01:32:20.892799 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3"} Jan 28 01:32:20.893423 kubelet[2972]: E0128 01:32:20.892846 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:20.893423 kubelet[2972]: E0128 01:32:20.892881 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rt7g9" podUID="441fbe90-529b-45d0-b9a6-f443cf214304" Jan 28 01:32:21.564487 containerd[1622]: time="2026-01-28T01:32:21.557351661Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:32:21.714298 containerd[1622]: time="2026-01-28T01:32:21.713388719Z" level=error msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" failed" error="failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:21.718716 kubelet[2972]: E0128 01:32:21.713683 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:32:21.718716 kubelet[2972]: E0128 01:32:21.713742 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57"} Jan 28 01:32:21.718716 kubelet[2972]: E0128 01:32:21.713790 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:21.718716 kubelet[2972]: E0128 01:32:21.713824 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4b5e90d-930c-4b60-ab0a-ec73967e82da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:32:23.572454 containerd[1622]: time="2026-01-28T01:32:23.569988533Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:32:23.609690 containerd[1622]: time="2026-01-28T01:32:23.574892312Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:32:23.609690 containerd[1622]: time="2026-01-28T01:32:23.593974313Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:32:23.886356 containerd[1622]: time="2026-01-28T01:32:23.884150516Z" level=error msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" failed" error="failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:23.886513 kubelet[2972]: E0128 01:32:23.884872 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:32:23.886513 kubelet[2972]: E0128 01:32:23.884941 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657"} Jan 28 01:32:23.886513 kubelet[2972]: E0128 01:32:23.884984 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:23.886513 kubelet[2972]: E0128 01:32:23.885013 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7adaa55-8214-45ce-9d9c-4b2fe100270c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-556k8" podUID="c7adaa55-8214-45ce-9d9c-4b2fe100270c" Jan 28 01:32:23.908151 containerd[1622]: time="2026-01-28T01:32:23.905096943Z" level=error msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" failed" error="failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:23.908432 kubelet[2972]: E0128 01:32:23.906883 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:32:23.908432 kubelet[2972]: E0128 01:32:23.907161 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889"} Jan 28 01:32:23.908432 kubelet[2972]: E0128 01:32:23.907474 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:23.908432 kubelet[2972]: E0128 01:32:23.907683 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"7efc6fb0-0d34-4603-98de-2c82b7e71158\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c86599d9-sc97f" podUID="7efc6fb0-0d34-4603-98de-2c82b7e71158" Jan 28 01:32:24.096370 containerd[1622]: time="2026-01-28T01:32:24.092521285Z" level=error msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" failed" error="failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:24.096687 kubelet[2972]: E0128 01:32:24.096059 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:32:24.096687 kubelet[2972]: E0128 01:32:24.096126 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f"} Jan 28 01:32:24.096687 kubelet[2972]: E0128 01:32:24.096170 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:24.096687 kubelet[2972]: E0128 01:32:24.096202 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"293f11a4-1519-4e40-8e4f-23ffad2f9d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:32:24.568662 containerd[1622]: time="2026-01-28T01:32:24.558361725Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:32:24.822191 containerd[1622]: time="2026-01-28T01:32:24.821775073Z" level=error msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" failed" error="failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:24.823089 kubelet[2972]: E0128 01:32:24.822959 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:32:24.823089 kubelet[2972]: E0128 01:32:24.823074 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c"} Jan 28 01:32:24.823217 kubelet[2972]: E0128 01:32:24.823133 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25dca920-f21c-49d2-adf9-753622c450d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:24.823217 kubelet[2972]: E0128 01:32:24.823166 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25dca920-f21c-49d2-adf9-753622c450d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:32:25.556421 containerd[1622]: time="2026-01-28T01:32:25.556144034Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:32:25.733451 containerd[1622]: time="2026-01-28T01:32:25.732749644Z" level=error msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" failed" error="failed to destroy network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:25.733775 kubelet[2972]: E0128 01:32:25.733215 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:32:25.733775 kubelet[2972]: E0128 01:32:25.733337 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1"} Jan 28 01:32:25.733775 kubelet[2972]: E0128 01:32:25.733389 2972 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0975c98-58e0-4afd-9150-95ec5af111e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:25.733775 kubelet[2972]: E0128 01:32:25.733430 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0975c98-58e0-4afd-9150-95ec5af111e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:32:32.562331 kubelet[2972]: E0128 01:32:32.561584 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:32.583676 containerd[1622]: time="2026-01-28T01:32:32.567270942Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:32:32.849973 containerd[1622]: time="2026-01-28T01:32:32.848998366Z" level=error msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" failed" error="failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:32.850111 kubelet[2972]: E0128 01:32:32.849227 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:32.850111 kubelet[2972]: E0128 01:32:32.849295 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3"} Jan 28 01:32:32.850111 kubelet[2972]: E0128 01:32:32.849343 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:32.850111 kubelet[2972]: E0128 01:32:32.849377 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"441fbe90-529b-45d0-b9a6-f443cf214304\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rt7g9" podUID="441fbe90-529b-45d0-b9a6-f443cf214304" Jan 28 01:32:33.553803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581500252.mount: Deactivated successfully. Jan 28 01:32:33.560775 containerd[1622]: time="2026-01-28T01:32:33.560245468Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:32:33.668730 containerd[1622]: time="2026-01-28T01:32:33.668569937Z" level=error msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" failed" error="failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:32:33.684765 kubelet[2972]: E0128 01:32:33.684419 2972 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:32:33.684765 kubelet[2972]: E0128 01:32:33.684477 2972 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"} Jan 28 01:32:33.684765 kubelet[2972]: E0128 01:32:33.684570 2972 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:32:33.684765 kubelet[2972]: E0128 01:32:33.684678 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b83327f-83d8-4d0b-8be8-e67980a37b46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:32:33.738039 containerd[1622]: time="2026-01-28T01:32:33.737306991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:33.754213 containerd[1622]: 
time="2026-01-28T01:32:33.753857242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 01:32:33.766916 containerd[1622]: time="2026-01-28T01:32:33.766702770Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:33.777812 containerd[1622]: time="2026-01-28T01:32:33.777698765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:32:33.782838 containerd[1622]: time="2026-01-28T01:32:33.778875894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 40.682099316s" Jan 28 01:32:33.782838 containerd[1622]: time="2026-01-28T01:32:33.778913054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:32:33.887697 containerd[1622]: time="2026-01-28T01:32:33.885405921Z" level=info msg="CreateContainer within sandbox \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:32:34.043003 containerd[1622]: time="2026-01-28T01:32:34.039815088Z" level=info msg="CreateContainer within sandbox \"674681eec39286316278b09ddad8837421a5a40b0fae5458a8fc85135ee8b83a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775\"" Jan 28 01:32:34.043003 containerd[1622]: time="2026-01-28T01:32:34.041799295Z" level=info msg="StartContainer for \"0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775\"" Jan 28 01:32:34.530701 containerd[1622]: time="2026-01-28T01:32:34.528486733Z" level=info msg="StartContainer for \"0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775\" returns successfully" Jan 28 01:32:35.270000 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:32:35.270153 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 01:32:35.358786 kubelet[2972]: E0128 01:32:35.358329 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:35.466766 kubelet[2972]: I0128 01:32:35.466479 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dxbgw" podStartSLOduration=3.555539778 podStartE2EDuration="1m22.466449475s" podCreationTimestamp="2026-01-28 01:31:13 +0000 UTC" firstStartedPulling="2026-01-28 01:31:14.886077021 +0000 UTC m=+77.849765286" lastFinishedPulling="2026-01-28 01:32:33.796986718 +0000 UTC m=+156.760674983" observedRunningTime="2026-01-28 01:32:35.465962055 +0000 UTC m=+158.429650340" watchObservedRunningTime="2026-01-28 01:32:35.466449475 +0000 UTC m=+158.430137741" Jan 28 01:32:35.561706 containerd[1622]: time="2026-01-28T01:32:35.561068035Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:32:41.515875 kubelet[2972]: E0128 01:32:41.513232 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:41.527123 kubelet[2972]: E0128 01:32:41.526074 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:41.538034 containerd[1622]: time="2026-01-28T01:32:41.528192491Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:32:41.541725 containerd[1622]: time="2026-01-28T01:32:41.538095993Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:32:41.584444 containerd[1622]: time="2026-01-28T01:32:41.577738858Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:32:41.581246 systemd[1]: run-containerd-runc-k8s.io-0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775-runc.88Kgl9.mount: Deactivated successfully. Jan 28 01:32:41.588922 containerd[1622]: time="2026-01-28T01:32:41.587928042Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:32:41.589312 containerd[1622]: time="2026-01-28T01:32:41.588937052Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:32:41.624030 kubelet[2972]: E0128 01:32:41.621530 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:45.091013 systemd[1]: run-containerd-runc-k8s.io-0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775-runc.qcKCks.mount: Deactivated successfully. Jan 28 01:32:45.612568 containerd[1622]: time="2026-01-28T01:32:45.612007084Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:43.079 [INFO][4828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:43.099 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" iface="eth0" netns="/var/run/netns/cni-763a62a7-514d-2028-f142-684a85add6bc" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:43.099 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" iface="eth0" netns="/var/run/netns/cni-763a62a7-514d-2028-f142-684a85add6bc" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:43.111 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" iface="eth0" netns="/var/run/netns/cni-763a62a7-514d-2028-f142-684a85add6bc" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:43.112 [INFO][4828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:43.112 [INFO][4828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.759 [INFO][4946] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.760 [INFO][4946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.760 [INFO][4946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.831 [WARNING][4946] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.832 [INFO][4946] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.882 [INFO][4946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:45.934477 containerd[1622]: 2026-01-28 01:32:45.906 [INFO][4828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:32:45.934477 containerd[1622]: time="2026-01-28T01:32:45.919752000Z" level=info msg="TearDown network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" successfully" Jan 28 01:32:45.934477 containerd[1622]: time="2026-01-28T01:32:45.920020257Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" returns successfully" Jan 28 01:32:45.921047 systemd[1]: run-netns-cni\x2d763a62a7\x2d514d\x2d2028\x2df142\x2d684a85add6bc.mount: Deactivated successfully. 
Jan 28 01:32:45.963707 containerd[1622]: time="2026-01-28T01:32:45.961534724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gwj5,Uid:b4b5e90d-930c-4b60-ab0a-ec73967e82da,Namespace:calico-system,Attempt:1,}" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:44.133 [INFO][4920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:44.133 [INFO][4920] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" iface="eth0" netns="/var/run/netns/cni-011d78b1-05c2-3f89-f7e7-bb3543d593ca" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:44.188 [INFO][4920] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" iface="eth0" netns="/var/run/netns/cni-011d78b1-05c2-3f89-f7e7-bb3543d593ca" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:44.188 [INFO][4920] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" iface="eth0" netns="/var/run/netns/cni-011d78b1-05c2-3f89-f7e7-bb3543d593ca" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:44.195 [INFO][4920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:44.195 [INFO][4920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:45.748 [INFO][4975] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:45.760 [INFO][4975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:45.883 [INFO][4975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:45.994 [WARNING][4975] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:45.994 [INFO][4975] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:46.019 [INFO][4975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:46.294259 containerd[1622]: 2026-01-28 01:32:46.215 [INFO][4920] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:32:46.294259 containerd[1622]: time="2026-01-28T01:32:46.239459361Z" level=info msg="TearDown network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" successfully" Jan 28 01:32:46.294259 containerd[1622]: time="2026-01-28T01:32:46.239502133Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" returns successfully" Jan 28 01:32:46.233286 systemd[1]: run-netns-cni\x2d011d78b1\x2d05c2\x2d3f89\x2df7e7\x2dbb3543d593ca.mount: Deactivated successfully. Jan 28 01:32:46.345865 containerd[1622]: time="2026-01-28T01:32:46.340153175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-5qb5l,Uid:25dca920-f21c-49d2-adf9-753622c450d8,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:43.580 [INFO][4888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:43.661 [INFO][4888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" iface="eth0" netns="/var/run/netns/cni-22891bfb-bdf8-653f-5e77-000f93ebe3c8" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:43.667 [INFO][4888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" iface="eth0" netns="/var/run/netns/cni-22891bfb-bdf8-653f-5e77-000f93ebe3c8" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:43.679 [INFO][4888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" iface="eth0" netns="/var/run/netns/cni-22891bfb-bdf8-653f-5e77-000f93ebe3c8" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:43.679 [INFO][4888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:43.680 [INFO][4888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:45.761 [INFO][4962] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:45.761 [INFO][4962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:46.018 [INFO][4962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:46.101 [WARNING][4962] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:46.101 [INFO][4962] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:46.168 [INFO][4962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:46.400231 containerd[1622]: 2026-01-28 01:32:46.374 [INFO][4888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:32:46.462723 systemd-journald[1193]: Under memory pressure, flushing caches. Jan 28 01:32:46.421747 systemd-resolved[1501]: Under memory pressure, flushing caches. Jan 28 01:32:46.463539 containerd[1622]: time="2026-01-28T01:32:46.428068239Z" level=info msg="TearDown network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" successfully" Jan 28 01:32:46.463539 containerd[1622]: time="2026-01-28T01:32:46.428114537Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" returns successfully" Jan 28 01:32:46.421869 systemd-resolved[1501]: Flushed all caches. Jan 28 01:32:46.470464 systemd[1]: run-netns-cni\x2d22891bfb\x2dbdf8\x2d653f\x2d5e77\x2d000f93ebe3c8.mount: Deactivated successfully. Jan 28 01:32:46.536119 containerd[1622]: time="2026-01-28T01:32:46.533315515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dp6nh,Uid:a0975c98-58e0-4afd-9150-95ec5af111e8,Namespace:calico-system,Attempt:1,}" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:43.676 [INFO][4868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:43.744 [INFO][4868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" iface="eth0" netns="/var/run/netns/cni-39123a23-7e66-dd15-9b50-1a32a1d4e1bf" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:43.770 [INFO][4868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" iface="eth0" netns="/var/run/netns/cni-39123a23-7e66-dd15-9b50-1a32a1d4e1bf" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:43.773 [INFO][4868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" iface="eth0" netns="/var/run/netns/cni-39123a23-7e66-dd15-9b50-1a32a1d4e1bf" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:43.774 [INFO][4868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:43.775 [INFO][4868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:45.772 [INFO][4964] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:45.773 [INFO][4964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:46.180 [INFO][4964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:46.356 [WARNING][4964] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:46.356 [INFO][4964] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:46.440 [INFO][4964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:46.773139 containerd[1622]: 2026-01-28 01:32:46.621 [INFO][4868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:32:46.893265 containerd[1622]: time="2026-01-28T01:32:46.801176457Z" level=info msg="TearDown network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" successfully" Jan 28 01:32:46.893265 containerd[1622]: time="2026-01-28T01:32:46.801215331Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" returns successfully" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:44.822 [INFO][4917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:44.823 [INFO][4917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" iface="eth0" netns="/var/run/netns/cni-4eae1bb8-1f7d-d58e-1153-f8c3e7bdddf6" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:44.824 [INFO][4917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" iface="eth0" netns="/var/run/netns/cni-4eae1bb8-1f7d-d58e-1153-f8c3e7bdddf6" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:44.827 [INFO][4917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" iface="eth0" netns="/var/run/netns/cni-4eae1bb8-1f7d-d58e-1153-f8c3e7bdddf6" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:44.827 [INFO][4917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:44.828 [INFO][4917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:45.742 [INFO][4993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:45.774 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:46.534 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:46.640 [WARNING][4993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:46.714 [INFO][4993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:46.773 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:46.893265 containerd[1622]: 2026-01-28 01:32:46.795 [INFO][4917] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:32:46.893265 containerd[1622]: time="2026-01-28T01:32:46.839763873Z" level=info msg="TearDown network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" successfully" Jan 28 01:32:46.893265 containerd[1622]: time="2026-01-28T01:32:46.839816483Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" returns successfully" Jan 28 01:32:46.893265 containerd[1622]: time="2026-01-28T01:32:46.864020050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-556k8,Uid:c7adaa55-8214-45ce-9d9c-4b2fe100270c,Namespace:kube-system,Attempt:1,}" Jan 28 01:32:46.894118 kubelet[2972]: E0128 01:32:46.808162 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:46.936496 containerd[1622]: time="2026-01-28T01:32:46.901344204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-ln9mw,Uid:293f11a4-1519-4e40-8e4f-23ffad2f9d2d,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:32:47.030837 systemd[1]: run-netns-cni\x2d4eae1bb8\x2d1f7d\x2dd58e\x2d1153\x2df8c3e7bdddf6.mount: Deactivated successfully. Jan 28 01:32:47.062395 systemd[1]: run-netns-cni\x2d39123a23\x2d7e66\x2ddd15\x2d9b50\x2d1a32a1d4e1bf.mount: Deactivated successfully. Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:44.122 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:44.124 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" iface="eth0" netns="/var/run/netns/cni-bdd7c371-8cb5-8823-63e5-a903fc08fa06" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:44.190 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" iface="eth0" netns="/var/run/netns/cni-bdd7c371-8cb5-8823-63e5-a903fc08fa06" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:44.196 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" iface="eth0" netns="/var/run/netns/cni-bdd7c371-8cb5-8823-63e5-a903fc08fa06" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:44.196 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:44.196 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:45.767 [INFO][4973] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:45.777 [INFO][4973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:46.774 [INFO][4973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:46.988 [WARNING][4973] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:46.988 [INFO][4973] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:47.003 [INFO][4973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 01:32:47.172456 containerd[1622]: 2026-01-28 01:32:47.117 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889"
Jan 28 01:32:47.191893 containerd[1622]: time="2026-01-28T01:32:47.191428723Z" level=info msg="TearDown network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" successfully"
Jan 28 01:32:47.191893 containerd[1622]: time="2026-01-28T01:32:47.191483286Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" returns successfully"
Jan 28 01:32:47.193142 containerd[1622]: time="2026-01-28T01:32:47.192878588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c86599d9-sc97f,Uid:7efc6fb0-0d34-4603-98de-2c82b7e71158,Namespace:calico-system,Attempt:1,}"
Jan 28 01:32:47.286220 systemd[1]: run-netns-cni\x2dbdd7c371\x2d8cb5\x2d8823\x2d63e5\x2da903fc08fa06.mount: Deactivated successfully.
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:46.087 [INFO][5041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:46.087 [INFO][5041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" iface="eth0" netns="/var/run/netns/cni-15b39d4d-9434-5450-a0a9-612b4e2a9def"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:46.124 [INFO][5041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" iface="eth0" netns="/var/run/netns/cni-15b39d4d-9434-5450-a0a9-612b4e2a9def"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:46.124 [INFO][5041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" iface="eth0" netns="/var/run/netns/cni-15b39d4d-9434-5450-a0a9-612b4e2a9def"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:46.124 [INFO][5041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:46.124 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.063 [INFO][5077] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.063 [INFO][5077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.063 [INFO][5077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.125 [WARNING][5077] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.125 [INFO][5077] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0"
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.283 [INFO][5077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 01:32:47.542774 containerd[1622]: 2026-01-28 01:32:47.455 [INFO][5041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec"
Jan 28 01:32:47.557771 systemd[1]: run-netns-cni\x2d15b39d4d\x2d9434\x2d5450\x2da0a9\x2d612b4e2a9def.mount: Deactivated successfully.
Jan 28 01:32:47.582065 containerd[1622]: time="2026-01-28T01:32:47.574358792Z" level=info msg="TearDown network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" successfully"
Jan 28 01:32:47.582065 containerd[1622]: time="2026-01-28T01:32:47.576128483Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" returns successfully"
Jan 28 01:32:47.586238 containerd[1622]: time="2026-01-28T01:32:47.583506786Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\""
Jan 28 01:32:47.617917 containerd[1622]: time="2026-01-28T01:32:47.616659562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f96f445cb-js8kb,Uid:7b83327f-83d8-4d0b-8be8-e67980a37b46,Namespace:calico-system,Attempt:1,}"
Jan 28 01:32:50.517750 systemd-journald[1193]: Under memory pressure, flushing caches.
Jan 28 01:32:50.462747 systemd-resolved[1501]: Under memory pressure, flushing caches.
Jan 28 01:32:50.462903 systemd-resolved[1501]: Flushed all caches.
Jan 28 01:32:51.297508 systemd-networkd[1276]: cali03a373fb304: Link UP
Jan 28 01:32:51.301194 systemd-networkd[1276]: cali03a373fb304: Gained carrier
Jan 28 01:32:51.364863 kernel: bpftool[5400]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:47.698 [INFO][5170] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:47.982 [INFO][5170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9gwj5-eth0 csi-node-driver- calico-system b4b5e90d-930c-4b60-ab0a-ec73967e82da 1239 0 2026-01-28 01:31:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9gwj5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali03a373fb304 [] [] }} ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:47.982 [INFO][5170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:49.810 [INFO][5280] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" HandleID="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:49.810 [INFO][5280] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" HandleID="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003834b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9gwj5", "timestamp":"2026-01-28 01:32:49.810380988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:49.810 [INFO][5280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:49.810 [INFO][5280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:49.810 [INFO][5280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.092 [INFO][5280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.398 [INFO][5280] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.467 [INFO][5280] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.562 [INFO][5280] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.624 [INFO][5280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.625 [INFO][5280] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.668 [INFO][5280] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.788 [INFO][5280] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.855 [INFO][5280] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.855 [INFO][5280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" host="localhost"
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.855 [INFO][5280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 01:32:51.629034 containerd[1622]: 2026-01-28 01:32:50.855 [INFO][5280] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" HandleID="k8s-pod-network.b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.720272 containerd[1622]: 2026-01-28 01:32:50.896 [INFO][5170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gwj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4b5e90d-930c-4b60-ab0a-ec73967e82da", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9gwj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a373fb304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:51.720272 containerd[1622]: 2026-01-28 01:32:50.906 [INFO][5170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.720272 containerd[1622]: 2026-01-28 01:32:50.906 [INFO][5170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03a373fb304 ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.720272 containerd[1622]: 2026-01-28 01:32:51.380 [INFO][5170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.720272 containerd[1622]: 2026-01-28 01:32:51.386 [INFO][5170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gwj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4b5e90d-930c-4b60-ab0a-ec73967e82da", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f", Pod:"csi-node-driver-9gwj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a373fb304", MAC:"92:ac:a3:26:ba:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:51.720272 containerd[1622]: 2026-01-28 01:32:51.613 [INFO][5170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f" Namespace="calico-system" Pod="csi-node-driver-9gwj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gwj5-eth0"
Jan 28 01:32:51.748455 systemd-networkd[1276]: cali54950a0a884: Link UP
Jan 28 01:32:51.784766 systemd-networkd[1276]: cali54950a0a884: Gained carrier
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:47.802 [INFO][5146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:47.940 [INFO][5146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0 calico-apiserver-69686dc768- calico-apiserver 25dca920-f21c-49d2-adf9-753622c450d8 1247 0 2026-01-28 01:30:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69686dc768 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69686dc768-5qb5l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54950a0a884 [] [] }} ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:47.957 [INFO][5146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:50.099 [INFO][5283] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" HandleID="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:50.099 [INFO][5283] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" HandleID="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000195230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69686dc768-5qb5l", "timestamp":"2026-01-28 01:32:50.099161745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:50.099 [INFO][5283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:50.863 [INFO][5283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:50.863 [INFO][5283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:50.931 [INFO][5283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.024 [INFO][5283] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.085 [INFO][5283] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.121 [INFO][5283] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.178 [INFO][5283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.178 [INFO][5283] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.198 [INFO][5283] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.265 [INFO][5283] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.493 [INFO][5283] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.493 [INFO][5283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" host="localhost"
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.493 [INFO][5283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 01:32:52.104032 containerd[1622]: 2026-01-28 01:32:51.494 [INFO][5283] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" HandleID="k8s-pod-network.e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.110910 containerd[1622]: 2026-01-28 01:32:51.673 [INFO][5146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"25dca920-f21c-49d2-adf9-753622c450d8", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69686dc768-5qb5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54950a0a884", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:52.110910 containerd[1622]: 2026-01-28 01:32:51.674 [INFO][5146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.110910 containerd[1622]: 2026-01-28 01:32:51.674 [INFO][5146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54950a0a884 ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.110910 containerd[1622]: 2026-01-28 01:32:51.831 [INFO][5146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.110910 containerd[1622]: 2026-01-28 01:32:51.872 [INFO][5146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"25dca920-f21c-49d2-adf9-753622c450d8", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19", Pod:"calico-apiserver-69686dc768-5qb5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54950a0a884", MAC:"ba:c9:f7:4f:a7:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:52.110910 containerd[1622]: 2026-01-28 01:32:51.977 [INFO][5146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-5qb5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0"
Jan 28 01:32:52.506328 systemd-journald[1193]: Under memory pressure, flushing caches.
Jan 28 01:32:52.494464 systemd-resolved[1501]: Under memory pressure, flushing caches.
Jan 28 01:32:52.494480 systemd-resolved[1501]: Flushed all caches.
Jan 28 01:32:52.503873 systemd-networkd[1276]: cali03a373fb304: Gained IPv6LL
Jan 28 01:32:52.511296 containerd[1622]: time="2026-01-28T01:32:52.510840018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:32:52.512746 containerd[1622]: time="2026-01-28T01:32:52.512370748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:32:52.512746 containerd[1622]: time="2026-01-28T01:32:52.512397869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:52.512746 containerd[1622]: time="2026-01-28T01:32:52.512531812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:52.571920 systemd-networkd[1276]: cali689a03405bd: Link UP
Jan 28 01:32:52.581177 systemd-networkd[1276]: cali689a03405bd: Gained carrier
Jan 28 01:32:52.642541 containerd[1622]: time="2026-01-28T01:32:52.639394059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:32:52.642541 containerd[1622]: time="2026-01-28T01:32:52.639505239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:32:52.642541 containerd[1622]: time="2026-01-28T01:32:52.639530287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:52.642541 containerd[1622]: time="2026-01-28T01:32:52.639820437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:48.012 [INFO][5204] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:48.111 [INFO][5204] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0 calico-apiserver-69686dc768- calico-apiserver 293f11a4-1519-4e40-8e4f-23ffad2f9d2d 1249 0 2026-01-28 01:30:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69686dc768 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69686dc768-ln9mw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali689a03405bd [] [] }} ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:48.160 [INFO][5204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:50.160 [INFO][5300] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" HandleID="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:50.165 [INFO][5300] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" HandleID="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002deeb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69686dc768-ln9mw", "timestamp":"2026-01-28 01:32:50.160403379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:50.203 [INFO][5300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:51.494 [INFO][5300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:51.494 [INFO][5300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:51.799 [INFO][5300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.013 [INFO][5300] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.194 [INFO][5300] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.306 [INFO][5300] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.335 [INFO][5300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.335 [INFO][5300] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.342 [INFO][5300] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.412 [INFO][5300] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.438 [INFO][5300] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.438 [INFO][5300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" host="localhost"
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.438 [INFO][5300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 01:32:52.835898 containerd[1622]: 2026-01-28 01:32:52.438 [INFO][5300] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" HandleID="k8s-pod-network.6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.864302 containerd[1622]: 2026-01-28 01:32:52.513 [INFO][5204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"293f11a4-1519-4e40-8e4f-23ffad2f9d2d", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69686dc768-ln9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali689a03405bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:52.864302 containerd[1622]: 2026-01-28 01:32:52.523 [INFO][5204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.864302 containerd[1622]: 2026-01-28 01:32:52.526 [INFO][5204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali689a03405bd ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.864302 containerd[1622]: 2026-01-28 01:32:52.573 [INFO][5204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.864302 containerd[1622]: 2026-01-28 01:32:52.577 [INFO][5204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"293f11a4-1519-4e40-8e4f-23ffad2f9d2d", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b", Pod:"calico-apiserver-69686dc768-ln9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali689a03405bd", MAC:"0e:9c:b2:e4:3d:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:52.864302 containerd[1622]: 2026-01-28 01:32:52.703 [INFO][5204] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b" Namespace="calico-apiserver" Pod="calico-apiserver-69686dc768-ln9mw" WorkloadEndpoint="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0"
Jan 28 01:32:52.932939 systemd[1]: run-containerd-runc-k8s.io-e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19-runc.wGOxfP.mount: Deactivated successfully.
Jan 28 01:32:53.076333 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 28 01:32:53.197795 systemd-networkd[1276]: cali54950a0a884: Gained IPv6LL
Jan 28 01:32:53.365096 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 28 01:32:53.411730 containerd[1622]: time="2026-01-28T01:32:53.405587341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:32:53.424460 containerd[1622]: time="2026-01-28T01:32:53.408507941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:32:53.424460 containerd[1622]: time="2026-01-28T01:32:53.423336799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:53.424460 containerd[1622]: time="2026-01-28T01:32:53.423522180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:53.621911 systemd-networkd[1276]: cali20696a0eedb: Link UP
Jan 28 01:32:53.633521 containerd[1622]: time="2026-01-28T01:32:53.629044505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-5qb5l,Uid:25dca920-f21c-49d2-adf9-753622c450d8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19\""
Jan 28 01:32:53.686822 systemd-networkd[1276]: cali20696a0eedb: Gained carrier
Jan 28 01:32:53.781686 containerd[1622]: time="2026-01-28T01:32:53.780106748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 01:32:53.989001 systemd[1]: run-containerd-runc-k8s.io-6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b-runc.wu4noi.mount: Deactivated successfully.
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:48.418 [INFO][5211] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:48.996 [INFO][5211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59c86599d9--sc97f-eth0 whisker-59c86599d9- calico-system 7efc6fb0-0d34-4603-98de-2c82b7e71158 1246 0 2026-01-28 01:31:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59c86599d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59c86599d9-sc97f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali20696a0eedb [] [] }} ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:49.005 [INFO][5211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:50.221 [INFO][5319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:50.227 [INFO][5319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d6290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59c86599d9-sc97f", "timestamp":"2026-01-28 01:32:50.221357693 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:50.230 [INFO][5319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:52.438 [INFO][5319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:52.439 [INFO][5319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:52.631 [INFO][5319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:52.939 [INFO][5319] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.074 [INFO][5319] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.162 [INFO][5319] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.190 [INFO][5319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.190 [INFO][5319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.297 [INFO][5319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.331 [INFO][5319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.414 [INFO][5319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.414 [INFO][5319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" host="localhost"
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.414 [INFO][5319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 01:32:54.021499 containerd[1622]: 2026-01-28 01:32:53.415 [INFO][5319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.027495 containerd[1622]: 2026-01-28 01:32:53.494 [INFO][5211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59c86599d9--sc97f-eth0", GenerateName:"whisker-59c86599d9-", Namespace:"calico-system", SelfLink:"", UID:"7efc6fb0-0d34-4603-98de-2c82b7e71158", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59c86599d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59c86599d9-sc97f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20696a0eedb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:54.027495 containerd[1622]: 2026-01-28 01:32:53.494 [INFO][5211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.027495 containerd[1622]: 2026-01-28 01:32:53.494 [INFO][5211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20696a0eedb ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.027495 containerd[1622]: 2026-01-28 01:32:53.672 [INFO][5211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.027495 containerd[1622]: 2026-01-28 01:32:53.672 [INFO][5211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59c86599d9--sc97f-eth0", GenerateName:"whisker-59c86599d9-", Namespace:"calico-system", SelfLink:"", UID:"7efc6fb0-0d34-4603-98de-2c82b7e71158", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59c86599d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e", Pod:"whisker-59c86599d9-sc97f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20696a0eedb", MAC:"32:bc:9a:15:ac:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 01:32:54.027495 containerd[1622]: 2026-01-28 01:32:53.901 [INFO][5211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Namespace="calico-system" Pod="whisker-59c86599d9-sc97f" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0"
Jan 28 01:32:54.066855 containerd[1622]: time="2026-01-28T01:32:54.066804009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gwj5,Uid:b4b5e90d-930c-4b60-ab0a-ec73967e82da,Namespace:calico-system,Attempt:1,} returns sandbox id \"b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f\""
Jan 28 01:32:54.106513 containerd[1622]: time="2026-01-28T01:32:54.106137004Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:32:54.108443 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 28 01:32:54.170834 containerd[1622]: time="2026-01-28T01:32:54.115478279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 01:32:54.171281 containerd[1622]: time="2026-01-28T01:32:54.115886041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 01:32:54.171981 kubelet[2972]: E0128 01:32:54.171931 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:32:54.173857 kubelet[2972]: E0128 01:32:54.172936 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:32:54.180860 kubelet[2972]: E0128 01:32:54.180461 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx52m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:32:54.184289 containerd[1622]: time="2026-01-28T01:32:54.182919855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 28 01:32:54.185134 kubelet[2972]: E0128 01:32:54.183727 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8"
Jan 28 01:32:54.293517 systemd-networkd[1276]: cali689a03405bd: Gained IPv6LL
Jan 28 01:32:54.494848 containerd[1622]: time="2026-01-28T01:32:54.492031758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:32:54.494848 containerd[1622]: time="2026-01-28T01:32:54.492093666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:32:54.494848 containerd[1622]: time="2026-01-28T01:32:54.492108916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:54.495571 containerd[1622]: time="2026-01-28T01:32:54.495488216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:32:54.507362 containerd[1622]: time="2026-01-28T01:32:54.507325187Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:32:54.513589 containerd[1622]: time="2026-01-28T01:32:54.513539508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 28 01:32:54.513950 containerd[1622]: time="2026-01-28T01:32:54.513908607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 28 01:32:54.515153 kubelet[2972]: E0128 01:32:54.514284 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 01:32:54.515153 kubelet[2972]: E0128 01:32:54.514342 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 01:32:54.515153 kubelet[2972]: E0128 01:32:54.514468 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:32:54.514841 systemd-networkd[1276]: calib067567a374: Link UP
Jan 28 01:32:54.531905 containerd[1622]: time="2026-01-28T01:32:54.531872424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 28 01:32:54.566958 systemd-networkd[1276]: calib067567a374: Gained carrier
Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:48.602 [INFO][5191] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:48.866 [INFO][5191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--556k8-eth0 coredns-668d6bf9bc- kube-system c7adaa55-8214-45ce-9d9c-4b2fe100270c 1244 0 2026-01-28 01:29:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-556k8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib067567a374 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-"
Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:48.866 [INFO][5191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s
ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:50.432 [INFO][5326] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" HandleID="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:50.453 [INFO][5326] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" HandleID="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000201720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-556k8", "timestamp":"2026-01-28 01:32:50.432219296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:50.457 [INFO][5326] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:53.416 [INFO][5326] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:53.424 [INFO][5326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:53.517 [INFO][5326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:53.874 [INFO][5326] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:53.960 [INFO][5326] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.018 [INFO][5326] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.057 [INFO][5326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.058 [INFO][5326] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.133 [INFO][5326] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.204 [INFO][5326] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.362 [INFO][5326] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.362 [INFO][5326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" host="localhost" Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.362 [INFO][5326] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:54.734028 containerd[1622]: 2026-01-28 01:32:54.362 [INFO][5326] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" HandleID="k8s-pod-network.e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.735100 containerd[1622]: 2026-01-28 01:32:54.434 [INFO][5191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--556k8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c7adaa55-8214-45ce-9d9c-4b2fe100270c", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-556k8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib067567a374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:54.735100 containerd[1622]: 2026-01-28 01:32:54.435 [INFO][5191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.735100 containerd[1622]: 2026-01-28 01:32:54.435 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib067567a374 
ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.735100 containerd[1622]: 2026-01-28 01:32:54.526 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.735100 containerd[1622]: 2026-01-28 01:32:54.528 [INFO][5191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--556k8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c7adaa55-8214-45ce-9d9c-4b2fe100270c", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a", Pod:"coredns-668d6bf9bc-556k8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib067567a374", MAC:"fe:67:7e:9c:77:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:54.735100 containerd[1622]: 2026-01-28 01:32:54.708 [INFO][5191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a" Namespace="kube-system" Pod="coredns-668d6bf9bc-556k8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:32:54.812583 systemd-networkd[1276]: vxlan.calico: Link UP Jan 28 01:32:54.812695 systemd-networkd[1276]: vxlan.calico: Gained carrier Jan 28 01:32:54.836565 containerd[1622]: time="2026-01-28T01:32:54.836515653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69686dc768-ln9mw,Uid:293f11a4-1519-4e40-8e4f-23ffad2f9d2d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b\"" Jan 28 01:32:55.118984 systemd-networkd[1276]: cali20696a0eedb: Gained IPv6LL Jan 28 01:32:55.391768 containerd[1622]: time="2026-01-28T01:32:55.385731092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:32:55.397444 containerd[1622]: time="2026-01-28T01:32:55.392457243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:32:55.397444 containerd[1622]: time="2026-01-28T01:32:55.392538087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:32:55.397444 containerd[1622]: time="2026-01-28T01:32:55.392577220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:32:55.397444 containerd[1622]: time="2026-01-28T01:32:55.392821393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:32:55.516454 containerd[1622]: time="2026-01-28T01:32:55.492949236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:32:55.516454 containerd[1622]: time="2026-01-28T01:32:55.493150768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:32:55.517028 kubelet[2972]: E0128 01:32:55.516970 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:32:55.536951 kubelet[2972]: E0128 01:32:55.517732 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:32:55.536951 kubelet[2972]: E0128 01:32:55.518011 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:55.536951 kubelet[2972]: E0128 01:32:55.536534 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:32:55.538079 containerd[1622]: time="2026-01-28T01:32:55.538042624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:32:55.792887 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:32:55.941065 containerd[1622]: time="2026-01-28T01:32:55.940164598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:32:55.983420 containerd[1622]: time="2026-01-28T01:32:55.977057331Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:32:55.983420 containerd[1622]: time="2026-01-28T01:32:55.979388977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:32:55.983791 kubelet[2972]: E0128 01:32:55.977469 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:32:55.983791 kubelet[2972]: E0128 01:32:55.977538 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:32:55.983791 kubelet[2972]: E0128 01:32:55.977777 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4s7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:55.988889 kubelet[2972]: E0128 01:32:55.984330 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:32:56.111778 kubelet[2972]: E0128 01:32:56.105061 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:49.861 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:49.865 [INFO][5258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" iface="eth0" netns="/var/run/netns/cni-e9432699-6cc4-4156-591e-74eae2622591" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:49.866 [INFO][5258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" iface="eth0" netns="/var/run/netns/cni-e9432699-6cc4-4156-591e-74eae2622591" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:49.867 [INFO][5258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" iface="eth0" netns="/var/run/netns/cni-e9432699-6cc4-4156-591e-74eae2622591" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:49.867 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:49.867 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:50.724 [INFO][5336] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:50.724 [INFO][5336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:55.475 [INFO][5336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:55.870 [WARNING][5336] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:55.870 [INFO][5336] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:55.951 [INFO][5336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:56.140765 containerd[1622]: 2026-01-28 01:32:56.127 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:32:56.206476 systemd[1]: run-netns-cni\x2de9432699\x2d6cc4\x2d4156\x2d591e\x2d74eae2622591.mount: Deactivated successfully. Jan 28 01:32:56.264875 containerd[1622]: time="2026-01-28T01:32:56.238485336Z" level=info msg="TearDown network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" successfully" Jan 28 01:32:56.264875 containerd[1622]: time="2026-01-28T01:32:56.238586538Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" returns successfully" Jan 28 01:32:56.264875 containerd[1622]: time="2026-01-28T01:32:56.263038668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rt7g9,Uid:441fbe90-529b-45d0-b9a6-f443cf214304,Namespace:kube-system,Attempt:1,}" Jan 28 01:32:56.265199 kubelet[2972]: E0128 01:32:56.248549 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:56.278928 systemd-networkd[1276]: cali91aaaf9e015: Link UP Jan 28 01:32:56.294446 systemd-networkd[1276]: cali91aaaf9e015: Gained carrier Jan 28 01:32:56.403431 systemd-networkd[1276]: calib067567a374: Gained IPv6LL Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:47.973 [INFO][5175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:48.289 [INFO][5175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--dp6nh-eth0 goldmane-666569f655- calico-system a0975c98-58e0-4afd-9150-95ec5af111e8 1240 0 2026-01-28 01:31:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-dp6nh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali91aaaf9e015 [] [] }} ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-" Jan 28 01:32:56.421524 
containerd[1622]: 2026-01-28 01:32:48.289 [INFO][5175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:50.505 [INFO][5304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" HandleID="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:50.506 [INFO][5304] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" HandleID="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001384a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-dp6nh", "timestamp":"2026-01-28 01:32:50.505786618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:50.506 [INFO][5304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:54.378 [INFO][5304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:54.379 [INFO][5304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:54.480 [INFO][5304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:54.668 [INFO][5304] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:54.832 [INFO][5304] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:54.889 [INFO][5304] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.041 [INFO][5304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.041 [INFO][5304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.224 [INFO][5304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08 Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.310 [INFO][5304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.454 [INFO][5304] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.454 [INFO][5304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" host="localhost" Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.474 [INFO][5304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:32:56.421524 containerd[1622]: 2026-01-28 01:32:55.474 [INFO][5304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" HandleID="k8s-pod-network.8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.439454 containerd[1622]: 2026-01-28 01:32:55.665 [INFO][5175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dp6nh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a0975c98-58e0-4afd-9150-95ec5af111e8", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-dp6nh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali91aaaf9e015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:56.439454 containerd[1622]: 2026-01-28 01:32:55.665 [INFO][5175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.439454 containerd[1622]: 2026-01-28 01:32:55.665 [INFO][5175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91aaaf9e015 ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.439454 containerd[1622]: 2026-01-28 01:32:56.266 [INFO][5175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.439454 containerd[1622]: 2026-01-28 01:32:56.271 [INFO][5175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dp6nh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a0975c98-58e0-4afd-9150-95ec5af111e8", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08", Pod:"goldmane-666569f655-dp6nh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali91aaaf9e015", MAC:"3e:d0:e5:19:91:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:56.439454 containerd[1622]: 2026-01-28 01:32:56.373 [INFO][5175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08" Namespace="calico-system" Pod="goldmane-666569f655-dp6nh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:32:56.664545 systemd-networkd[1276]: vxlan.calico: Gained IPv6LL Jan 28 01:32:56.685751 kubelet[2972]: E0128 01:32:56.678745 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:32:56.689678 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:32:56.704890 kubelet[2972]: E0128 01:32:56.704824 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:32:56.837167 containerd[1622]: time="2026-01-28T01:32:56.837010889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c86599d9-sc97f,Uid:7efc6fb0-0d34-4603-98de-2c82b7e71158,Namespace:calico-system,Attempt:1,} returns sandbox id \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\"" Jan 28 01:32:56.916941 containerd[1622]: time="2026-01-28T01:32:56.911242413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:32:57.025064 systemd-networkd[1276]: calia268763d958: Link UP Jan 28 01:32:57.077912 systemd-networkd[1276]: calia268763d958: Gained carrier Jan 28 01:32:57.121576 containerd[1622]: time="2026-01-28T01:32:57.115096824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:32:57.121576 containerd[1622]: time="2026-01-28T01:32:57.115165563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:32:57.121576 containerd[1622]: time="2026-01-28T01:32:57.115181183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:32:57.121576 containerd[1622]: time="2026-01-28T01:32:57.115363979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:50.121 [INFO][5256] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:50.537 [INFO][5256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0 calico-kube-controllers-f96f445cb- calico-system 7b83327f-83d8-4d0b-8be8-e67980a37b46 1262 0 2026-01-28 01:31:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f96f445cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f96f445cb-js8kb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia268763d958 [] [] }} ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:50.537 [INFO][5256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:51.141 [INFO][5376] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" HandleID="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" 
Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:51.142 [INFO][5376] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" HandleID="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042f870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f96f445cb-js8kb", "timestamp":"2026-01-28 01:32:51.137011657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:51.142 [INFO][5376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:55.971 [INFO][5376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:55.972 [INFO][5376] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.265 [INFO][5376] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.511 [INFO][5376] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.598 [INFO][5376] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.689 [INFO][5376] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.733 [INFO][5376] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.733 [INFO][5376] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.756 [INFO][5376] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.805 [INFO][5376] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.898 [INFO][5376] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.898 [INFO][5376] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" host="localhost" Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 
01:32:56.898 [INFO][5376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:32:57.323170 containerd[1622]: 2026-01-28 01:32:56.898 [INFO][5376] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" HandleID="k8s-pod-network.dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.324246 containerd[1622]: 2026-01-28 01:32:56.937 [INFO][5256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0", GenerateName:"calico-kube-controllers-f96f445cb-", Namespace:"calico-system", SelfLink:"", UID:"7b83327f-83d8-4d0b-8be8-e67980a37b46", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f96f445cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f96f445cb-js8kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia268763d958", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:57.324246 containerd[1622]: 2026-01-28 01:32:56.937 [INFO][5256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.324246 containerd[1622]: 2026-01-28 01:32:56.937 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia268763d958 ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.324246 containerd[1622]: 2026-01-28 01:32:57.121 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.324246 containerd[1622]: 
2026-01-28 01:32:57.139 [INFO][5256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0", GenerateName:"calico-kube-controllers-f96f445cb-", Namespace:"calico-system", SelfLink:"", UID:"7b83327f-83d8-4d0b-8be8-e67980a37b46", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f96f445cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f", Pod:"calico-kube-controllers-f96f445cb-js8kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia268763d958", MAC:"02:66:10:f3:18:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:57.324246 containerd[1622]: 2026-01-28 01:32:57.267 [INFO][5256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f" Namespace="calico-system" Pod="calico-kube-controllers-f96f445cb-js8kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:32:57.338396 containerd[1622]: time="2026-01-28T01:32:57.338346187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-556k8,Uid:c7adaa55-8214-45ce-9d9c-4b2fe100270c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a\"" Jan 28 01:32:57.357000 kubelet[2972]: E0128 01:32:57.356899 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:57.366761 systemd-networkd[1276]: cali91aaaf9e015: Gained IPv6LL Jan 28 01:32:57.429880 containerd[1622]: time="2026-01-28T01:32:57.426037722Z" level=info msg="CreateContainer within sandbox \"e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:32:57.441986 containerd[1622]: time="2026-01-28T01:32:57.441769224Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:32:57.509446 containerd[1622]: time="2026-01-28T01:32:57.509185902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:32:57.522436 kubelet[2972]: E0128 01:32:57.510158 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:32:57.522805 containerd[1622]: time="2026-01-28T01:32:57.509910694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:32:57.522883 kubelet[2972]: E0128 01:32:57.522719 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:32:57.617261 kubelet[2972]: E0128 01:32:57.523269 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcff5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59c86599d9-sc97f_calico-system(7efc6fb0-0d34-4603-98de-2c82b7e71158): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:57.631447 containerd[1622]: time="2026-01-28T01:32:57.629928666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:32:57.637544 systemd[1]: run-containerd-runc-k8s.io-8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08-runc.WqKO45.mount: Deactivated successfully. 
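[editor's note] The PullImage failures above come from containerd resolving ghcr.io/flatcar/calico/whisker:v3.30.4, getting a 404 from the registry ("trying next host - response was http.StatusNotFound"), and surfacing that to the kubelet as gRPC NotFound. Below is a minimal sketch of how a client can reproduce and distinguish that case using the public containerd v1 Go client (the import paths and calls are the real client API, but this is an illustration, not the kubelet's actual pull path):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd socket the kubelet talks to.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if errdefs.IsNotFound(err) {
            // This is the condition logged above: the tag does not exist
            // upstream, so retrying the pull cannot succeed.
            fmt.Printf("image %s not found upstream\n", ref)
            return
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }

Because the error is NotFound rather than transient, the only real fix is publishing the missing tag or repointing the image reference; the backoff loop seen later in the log cannot recover on its own.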
Jan 28 01:32:57.684386 containerd[1622]: time="2026-01-28T01:32:57.679502674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:32:57.684386 containerd[1622]: time="2026-01-28T01:32:57.679714465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:32:57.684386 containerd[1622]: time="2026-01-28T01:32:57.679735855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:32:57.684386 containerd[1622]: time="2026-01-28T01:32:57.679894887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:32:57.883666 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:32:58.086426 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:32:58.161827 containerd[1622]: time="2026-01-28T01:32:58.138983094Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:32:58.180298 containerd[1622]: time="2026-01-28T01:32:58.171058497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:32:58.180298 containerd[1622]: time="2026-01-28T01:32:58.171208531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:32:58.180559 kubelet[2972]: E0128 01:32:58.171531 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:32:58.180559 kubelet[2972]: E0128 01:32:58.171675 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:32:58.180559 kubelet[2972]: E0128 01:32:58.171839 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcff5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59c86599d9-sc97f_calico-system(7efc6fb0-0d34-4603-98de-2c82b7e71158): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:58.180559 kubelet[2972]: E0128 01:32:58.177108 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59c86599d9-sc97f" podUID="7efc6fb0-0d34-4603-98de-2c82b7e71158" Jan 28 01:32:58.221235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015253193.mount: Deactivated successfully. 
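[editor's note] Note how the pod_workers line above folds the whisker and whisker-backend failures into one "Error syncing pod, skipping" entry listing both causes. A rough sketch of that error-aggregation pattern, using the apimachinery helper the kubelet also relies on (container names are taken from the log; the surrounding code is illustrative, not kubelet's actual syncPod logic):

    package main

    import (
        "fmt"

        utilerrors "k8s.io/apimachinery/pkg/util/errors"
    )

    func main() {
        // One error per container that failed to start, as in the log.
        var startErrs []error
        for _, name := range []string{"whisker", "whisker-backend"} {
            startErrs = append(startErrs,
                fmt.Errorf("failed to %q for %q with ErrImagePull", "StartContainer", name))
        }

        // NewAggregate flattens the slice into a single error whose
        // message lists every cause, which is why the journal shows one
        // "Error syncing pod, skipping" line covering both containers.
        if agg := utilerrors.NewAggregate(startErrs); agg != nil {
            fmt.Println("Error syncing pod, skipping:", agg.Error())
        }
    }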
Jan 28 01:32:58.269541 containerd[1622]: time="2026-01-28T01:32:58.267178329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dp6nh,Uid:a0975c98-58e0-4afd-9150-95ec5af111e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08\"" Jan 28 01:32:58.310876 containerd[1622]: time="2026-01-28T01:32:58.292839290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:32:58.340465 containerd[1622]: time="2026-01-28T01:32:58.331313081Z" level=info msg="CreateContainer within sandbox \"e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efd0425639b82f1ebf9b6b43b15d313f6b37f1a1d00062c515ff17cd0058f4cd\"" Jan 28 01:32:58.404294 containerd[1622]: time="2026-01-28T01:32:58.386219038Z" level=info msg="StartContainer for \"efd0425639b82f1ebf9b6b43b15d313f6b37f1a1d00062c515ff17cd0058f4cd\"" Jan 28 01:32:58.662430 kubelet[2972]: E0128 01:32:58.643023 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:32:58.710112 systemd-networkd[1276]: cali59b88363bbc: Link UP Jan 28 01:32:58.714208 systemd-networkd[1276]: cali59b88363bbc: Gained carrier Jan 28 01:32:58.777761 containerd[1622]: time="2026-01-28T01:32:58.773092319Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:32:58.778911 containerd[1622]: time="2026-01-28T01:32:58.778869464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f96f445cb-js8kb,Uid:7b83327f-83d8-4d0b-8be8-e67980a37b46,Namespace:calico-system,Attempt:1,} returns sandbox id \"dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f\"" Jan 28 01:32:58.810510 containerd[1622]: time="2026-01-28T01:32:58.804980547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:32:58.871251 containerd[1622]: time="2026-01-28T01:32:58.868917424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:32:58.871251 containerd[1622]: time="2026-01-28T01:32:58.869048342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:32:58.908289 kubelet[2972]: E0128 01:32:58.904287 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:32:58.908289 kubelet[2972]: E0128 01:32:58.904442 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:32:58.908289 kubelet[2972]: E0128 01:32:58.906899 2972 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkb45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:58.915554 containerd[1622]: time="2026-01-28T01:32:58.913861424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:32:58.915772 kubelet[2972]: E0128 01:32:58.914271 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:56.967 [INFO][5684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0 coredns-668d6bf9bc- kube-system 441fbe90-529b-45d0-b9a6-f443cf214304 1271 0 2026-01-28 01:29:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rt7g9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59b88363bbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:56.967 [INFO][5684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.602 [INFO][5744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" HandleID="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.603 [INFO][5744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" HandleID="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000474e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rt7g9", "timestamp":"2026-01-28 01:32:57.602119048 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.603 [INFO][5744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.603 [INFO][5744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.603 [INFO][5744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.716 [INFO][5744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:57.911 [INFO][5744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.090 [INFO][5744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.120 [INFO][5744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.134 [INFO][5744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.134 [INFO][5744] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.176 [INFO][5744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35 Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.259 [INFO][5744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.361 [INFO][5744] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.361 [INFO][5744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" host="localhost" Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.362 [INFO][5744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
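[editor's note] The IPAM trace above is the standard Calico block-allocation flow: acquire the host-wide lock, confirm this host's affinity to 192.168.88.128/26, load the block, assign one free address (here 192.168.88.136), write the block back, release the lock. A toy sketch of just the assignment step, with a flat bitmap standing in for Calico's real block data structure (handles, attributes, and the datastore write are omitted):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models the smallest useful piece of Calico-style IPAM:
    // a /26 with a per-address allocation bitmap.
    type block struct {
        cidr  netip.Prefix
        inUse [64]bool // 2^(32-26) addresses in a /26
    }

    // assign returns the first free address in the block, mirroring
    // "Attempting to assign 1 addresses from block" in the log.
    func (b *block) assign() (netip.Addr, bool) {
        addr := b.cidr.Addr()
        for i := 0; i < 64; i++ {
            if !b.inUse[i] {
                b.inUse[i] = true
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
        // Pretend .128 through .135 were claimed by earlier pods, so the
        // next assignment lands on .136 as it did for coredns above.
        for i := 0; i < 8; i++ {
            b.assign()
        }
        ip, _ := b.assign()
        fmt.Println("assigned", ip) // 192.168.88.136
    }

The host-wide lock bracketing every request ("About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock") is what keeps concurrent CNI ADDs on the same node from claiming the same address out of one block.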
Jan 28 01:32:59.020828 containerd[1622]: 2026-01-28 01:32:58.362 [INFO][5744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" HandleID="k8s-pod-network.e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.034779 containerd[1622]: 2026-01-28 01:32:58.523 [INFO][5684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441fbe90-529b-45d0-b9a6-f443cf214304", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rt7g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59b88363bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:59.034779 containerd[1622]: 2026-01-28 01:32:58.524 [INFO][5684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.034779 containerd[1622]: 2026-01-28 01:32:58.524 [INFO][5684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59b88363bbc ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.034779 containerd[1622]: 2026-01-28 01:32:58.840 [INFO][5684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.034779 
containerd[1622]: 2026-01-28 01:32:58.842 [INFO][5684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441fbe90-529b-45d0-b9a6-f443cf214304", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35", Pod:"coredns-668d6bf9bc-rt7g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59b88363bbc", MAC:"3e:8a:8a:c9:21:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:32:59.034779 containerd[1622]: 2026-01-28 01:32:58.974 [INFO][5684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35" Namespace="kube-system" Pod="coredns-668d6bf9bc-rt7g9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:32:59.103255 systemd-networkd[1276]: calia268763d958: Gained IPv6LL Jan 28 01:32:59.190157 kubelet[2972]: E0128 01:32:59.182276 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:32:59.296415 systemd[1]: run-containerd-runc-k8s.io-efd0425639b82f1ebf9b6b43b15d313f6b37f1a1d00062c515ff17cd0058f4cd-runc.vKthXB.mount: Deactivated successfully. 
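[editor's note] Once a pull has failed with ErrImagePull, the kubelet stops hammering the registry and the journal switches to ImagePullBackOff, as seen for goldmane above. A minimal sketch of the doubling-with-a-cap schedule behind that message (the 10s initial delay and 5m cap are the usual kubelet defaults; treat the numbers as an assumption, not a contract):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet-style image pull backoff: the delay doubles after every
        // failed pull attempt and is capped at a maximum.
        const (
            initialDelay = 10 * time.Second
            maxDelay     = 5 * time.Minute
        )
        delay := initialDelay
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: back-off pulling image, next retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Since the underlying error here is NotFound, every retry fails the same way and the pod stays in ImagePullBackOff indefinitely, which is exactly the repeating pattern in this log.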
Jan 28 01:32:59.496260 containerd[1622]: time="2026-01-28T01:32:59.411274554Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:32:59.597863 containerd[1622]: time="2026-01-28T01:32:59.580180385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:32:59.689306 containerd[1622]: time="2026-01-28T01:32:59.598102398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:32:59.689306 containerd[1622]: time="2026-01-28T01:32:59.631718332Z" level=info msg="StopPodSandbox for \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\"" Jan 28 01:32:59.716667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e-shm.mount: Deactivated successfully. Jan 28 01:32:59.799764 systemd-networkd[1276]: cali59b88363bbc: Gained IPv6LL Jan 28 01:32:59.816103 kubelet[2972]: E0128 01:32:59.804020 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:32:59.816103 kubelet[2972]: E0128 01:32:59.804088 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:32:59.816103 kubelet[2972]: E0128 01:32:59.804239 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:32:59.816103 kubelet[2972]: E0128 01:32:59.813515 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:33:00.570711 kubelet[2972]: E0128 01:33:00.560537 2972 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:00.688261 containerd[1622]: time="2026-01-28T01:33:00.638103323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:33:00.688261 containerd[1622]: time="2026-01-28T01:33:00.638196921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:33:00.688261 containerd[1622]: time="2026-01-28T01:33:00.638217289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:33:00.688261 containerd[1622]: time="2026-01-28T01:33:00.638362985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:33:00.980145 kubelet[2972]: E0128 01:33:00.979190 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:33:01.044882 containerd[1622]: time="2026-01-28T01:33:01.040091635Z" level=info msg="StartContainer for \"efd0425639b82f1ebf9b6b43b15d313f6b37f1a1d00062c515ff17cd0058f4cd\" returns successfully" Jan 28 01:33:01.096879 kubelet[2972]: E0128 01:33:01.075817 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:33:01.096774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e-rootfs.mount: Deactivated successfully. Jan 28 01:33:01.134230 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:33:01.243060 containerd[1622]: time="2026-01-28T01:33:01.241938598Z" level=info msg="shim disconnected" id=d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e namespace=k8s.io Jan 28 01:33:01.243060 containerd[1622]: time="2026-01-28T01:33:01.242078172Z" level=warning msg="cleaning up after shim disconnected" id=d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e namespace=k8s.io Jan 28 01:33:01.243060 containerd[1622]: time="2026-01-28T01:33:01.242092058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:00.314 [WARNING][5857] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dp6nh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a0975c98-58e0-4afd-9150-95ec5af111e8", ResourceVersion:"1354", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08", Pod:"goldmane-666569f655-dp6nh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali91aaaf9e015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:00.370 [INFO][5857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:00.371 [INFO][5857] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" iface="eth0" netns="" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:00.371 [INFO][5857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:00.371 [INFO][5857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.190 [INFO][5913] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.191 [INFO][5913] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.192 [INFO][5913] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.315 [WARNING][5913] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.315 [INFO][5913] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.367 [INFO][5913] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:01.437754 containerd[1622]: 2026-01-28 01:33:01.396 [INFO][5857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:01.442349 containerd[1622]: time="2026-01-28T01:33:01.439215426Z" level=info msg="TearDown network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" successfully" Jan 28 01:33:01.442349 containerd[1622]: time="2026-01-28T01:33:01.439252345Z" level=info msg="StopPodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" returns successfully" Jan 28 01:33:01.480742 containerd[1622]: time="2026-01-28T01:33:01.480692932Z" level=info msg="RemovePodSandbox for \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:33:01.516344 containerd[1622]: time="2026-01-28T01:33:01.516293718Z" level=info msg="Forcibly stopping sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\"" Jan 28 01:33:01.520291 containerd[1622]: time="2026-01-28T01:33:01.520250882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rt7g9,Uid:441fbe90-529b-45d0-b9a6-f443cf214304,Namespace:kube-system,Attempt:1,} returns sandbox id \"e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35\"" Jan 28 01:33:01.537148 kubelet[2972]: E0128 01:33:01.537108 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:01.777031 containerd[1622]: time="2026-01-28T01:33:01.776250247Z" level=info msg="CreateContainer within sandbox \"e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:33:01.981034 kubelet[2972]: E0128 01:33:01.980995 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:02.102158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017304509.mount: Deactivated successfully. 
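[editor's note] The DEL path above is deliberately forgiving: IPAM asks to release by handle, finds nothing (the address was already freed), logs "Asked to release address but it doesn't exist. Ignoring", and teardown still returns success. A small sketch of that idempotent-release contract, with a plain map standing in for the Calico datastore (the handle ID is copied from the log):

    package main

    import "fmt"

    // releaseByHandle frees whatever a handle still owns in a toy
    // in-memory store. The point, visible in the log above, is that a
    // missing allocation counts as success: CNI DEL must be idempotent
    // because the runtime can retry teardown for a sandbox whose
    // address was already released.
    func releaseByHandle(store map[string][]string, handleID string) {
        ips, ok := store[handleID]
        if !ok {
            fmt.Println("Asked to release address but it doesn't exist. Ignoring")
            return // swallow not-found instead of failing teardown
        }
        delete(store, handleID)
        fmt.Println("released", ips)
    }

    func main() {
        store := map[string][]string{} // empty: the address was freed earlier
        releaseByHandle(store, "k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1")
        fmt.Println("Teardown processing complete.")
    }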
Jan 28 01:33:02.220874 containerd[1622]: time="2026-01-28T01:33:02.217739491Z" level=info msg="CreateContainer within sandbox \"e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ad20b5cc3d0e1acbcd13d77a3a67ed3e0899f57be34266721030475453b46d1\"" Jan 28 01:33:02.222972 containerd[1622]: time="2026-01-28T01:33:02.222111026Z" level=info msg="StartContainer for \"9ad20b5cc3d0e1acbcd13d77a3a67ed3e0899f57be34266721030475453b46d1\"" Jan 28 01:33:02.316363 kubelet[2972]: I0128 01:33:02.314278 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-556k8" podStartSLOduration=185.314251306 podStartE2EDuration="3m5.314251306s" podCreationTimestamp="2026-01-28 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:33:02.304035612 +0000 UTC m=+185.267723908" watchObservedRunningTime="2026-01-28 01:33:02.314251306 +0000 UTC m=+185.277939591" Jan 28 01:33:02.807736 systemd-networkd[1276]: cali20696a0eedb: Link DOWN Jan 28 01:33:02.807749 systemd-networkd[1276]: cali20696a0eedb: Lost carrier Jan 28 01:33:03.338789 containerd[1622]: time="2026-01-28T01:33:03.338684194Z" level=info msg="StartContainer for \"9ad20b5cc3d0e1acbcd13d77a3a67ed3e0899f57be34266721030475453b46d1\" returns successfully" Jan 28 01:33:03.382270 kubelet[2972]: E0128 01:33:03.381468 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:03.382270 kubelet[2972]: I0128 01:33:03.381800 2972 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:02.535 [WARNING][6023] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--dp6nh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a0975c98-58e0-4afd-9150-95ec5af111e8", ResourceVersion:"1369", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8830d82e9c3ecbc22157ca5ba9bbe62f3bbf65370adc42cfc47a8f84e4b20c08", Pod:"goldmane-666569f655-dp6nh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali91aaaf9e015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:02.556 [INFO][6023] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:02.556 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" iface="eth0" netns="" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:02.556 [INFO][6023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:02.556 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.169 [INFO][6079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.179 [INFO][6079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.179 [INFO][6079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.294 [WARNING][6079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.295 [INFO][6079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" HandleID="k8s-pod-network.281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Workload="localhost-k8s-goldmane--666569f655--dp6nh-eth0" Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.333 [INFO][6079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:03.439745 containerd[1622]: 2026-01-28 01:33:03.398 [INFO][6023] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1" Jan 28 01:33:03.439745 containerd[1622]: time="2026-01-28T01:33:03.435102553Z" level=info msg="TearDown network for sandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" successfully" Jan 28 01:33:03.575763 containerd[1622]: time="2026-01-28T01:33:03.570452759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:33:03.575763 containerd[1622]: time="2026-01-28T01:33:03.575382039Z" level=info msg="RemovePodSandbox \"281e61e14583bcfbda394358d38e2ff885b1ebb48c962cf9cede5917b29377d1\" returns successfully" Jan 28 01:33:03.578959 containerd[1622]: time="2026-01-28T01:33:03.578732018Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:02.757 [INFO][6042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:02.786 [INFO][6042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" iface="eth0" netns="/var/run/netns/cni-c7bec4ab-f74e-5741-cccc-7d7705be0286" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:02.794 [INFO][6042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" iface="eth0" netns="/var/run/netns/cni-c7bec4ab-f74e-5741-cccc-7d7705be0286" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:02.908 [INFO][6042] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" after=120.695817ms iface="eth0" netns="/var/run/netns/cni-c7bec4ab-f74e-5741-cccc-7d7705be0286" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:02.909 [INFO][6042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:02.909 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:03.390 [INFO][6088] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:03.393 [INFO][6088] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:03.393 [INFO][6088] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:03.934 [INFO][6088] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:03.940 [INFO][6088] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:04.007 [INFO][6088] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:04.099740 containerd[1622]: 2026-01-28 01:33:04.019 [INFO][6042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:33:04.118020 containerd[1622]: time="2026-01-28T01:33:04.117915929Z" level=info msg="TearDown network for sandbox \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\" successfully" Jan 28 01:33:04.118524 containerd[1622]: time="2026-01-28T01:33:04.118409754Z" level=info msg="StopPodSandbox for \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\" returns successfully" Jan 28 01:33:04.146329 containerd[1622]: time="2026-01-28T01:33:04.145123796Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:33:04.166942 systemd[1]: run-netns-cni\x2dc7bec4ab\x2df74e\x2d5741\x2dcccc\x2d7d7705be0286.mount: Deactivated successfully. 
Jan 28 01:33:04.481198 kubelet[2972]: E0128 01:33:04.477254 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:04.628977 kubelet[2972]: E0128 01:33:04.592879 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:04.799008 kubelet[2972]: I0128 01:33:04.761544 2972 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rt7g9" podStartSLOduration=187.761520201 podStartE2EDuration="3m7.761520201s" podCreationTimestamp="2026-01-28 01:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:33:04.719026228 +0000 UTC m=+187.682714503" watchObservedRunningTime="2026-01-28 01:33:04.761520201 +0000 UTC m=+187.725208536" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.193 [WARNING][6129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"293f11a4-1519-4e40-8e4f-23ffad2f9d2d", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b", Pod:"calico-apiserver-69686dc768-ln9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali689a03405bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.194 [INFO][6129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.194 [INFO][6129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" iface="eth0" netns="" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.194 [INFO][6129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.194 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.529 [INFO][6144] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.529 [INFO][6144] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.529 [INFO][6144] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.733 [WARNING][6144] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.734 [INFO][6144] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.819 [INFO][6144] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:04.870333 containerd[1622]: 2026-01-28 01:33:04.860 [INFO][6129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:04.890227 containerd[1622]: time="2026-01-28T01:33:04.870369205Z" level=info msg="TearDown network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" successfully" Jan 28 01:33:04.890227 containerd[1622]: time="2026-01-28T01:33:04.870407607Z" level=info msg="StopPodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" returns successfully" Jan 28 01:33:04.897995 containerd[1622]: time="2026-01-28T01:33:04.895213128Z" level=info msg="RemovePodSandbox for \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:33:04.897995 containerd[1622]: time="2026-01-28T01:33:04.895262391Z" level=info msg="Forcibly stopping sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\"" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.049 [WARNING][6152] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59c86599d9--sc97f-eth0", GenerateName:"whisker-59c86599d9-", Namespace:"calico-system", SelfLink:"", UID:"7efc6fb0-0d34-4603-98de-2c82b7e71158", ResourceVersion:"1389", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59c86599d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e", Pod:"whisker-59c86599d9-sc97f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20696a0eedb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.050 [INFO][6152] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.050 [INFO][6152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" iface="eth0" netns="" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.050 [INFO][6152] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.050 [INFO][6152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.394 [INFO][6181] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.394 [INFO][6181] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.394 [INFO][6181] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.433 [WARNING][6181] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.433 [INFO][6181] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.459 [INFO][6181] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:05.492865 containerd[1622]: 2026-01-28 01:33:05.480 [INFO][6152] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:05.498241 containerd[1622]: time="2026-01-28T01:33:05.498197453Z" level=info msg="TearDown network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" successfully" Jan 28 01:33:05.498339 containerd[1622]: time="2026-01-28T01:33:05.498317089Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" returns successfully" Jan 28 01:33:05.501977 kubelet[2972]: E0128 01:33:05.499781 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:05.697944 kubelet[2972]: I0128 01:33:05.696686 2972 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcff5\" (UniqueName: \"kubernetes.io/projected/7efc6fb0-0d34-4603-98de-2c82b7e71158-kube-api-access-zcff5\") pod \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" (UID: \"7efc6fb0-0d34-4603-98de-2c82b7e71158\") " Jan 28 01:33:05.697944 kubelet[2972]: I0128 01:33:05.696788 2972 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-backend-key-pair\") pod \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" (UID: \"7efc6fb0-0d34-4603-98de-2c82b7e71158\") " Jan 28 01:33:05.697944 kubelet[2972]: I0128 01:33:05.696827 2972 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-ca-bundle\") pod \"7efc6fb0-0d34-4603-98de-2c82b7e71158\" (UID: \"7efc6fb0-0d34-4603-98de-2c82b7e71158\") " Jan 28 01:33:05.719735 kubelet[2972]: I0128 01:33:05.719532 2972 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7efc6fb0-0d34-4603-98de-2c82b7e71158" (UID: "7efc6fb0-0d34-4603-98de-2c82b7e71158"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:33:05.754170 kubelet[2972]: I0128 01:33:05.753891 2972 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7efc6fb0-0d34-4603-98de-2c82b7e71158-kube-api-access-zcff5" (OuterVolumeSpecName: "kube-api-access-zcff5") pod "7efc6fb0-0d34-4603-98de-2c82b7e71158" (UID: "7efc6fb0-0d34-4603-98de-2c82b7e71158"). InnerVolumeSpecName "kube-api-access-zcff5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.436 [WARNING][6176] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"293f11a4-1519-4e40-8e4f-23ffad2f9d2d", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6154edbae6a689c805053f917945682a253663497b1551e66c9f95bbd521dd6b", Pod:"calico-apiserver-69686dc768-ln9mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali689a03405bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.436 [INFO][6176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.437 [INFO][6176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" iface="eth0" netns="" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.437 [INFO][6176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.437 [INFO][6176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.655 [INFO][6196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.656 [INFO][6196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.656 [INFO][6196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
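The repeated "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" warnings above reflect an ownership guard on CNI DEL: if the workload endpoint is now owned by a newer sandbox, a stale delete must not tear it down, but the caller's own IPs are still released, and IPAM treats a missing handle as already freed ("Asked to release address but it doesn't exist. Ignoring"). A sketch of that guard; the types and names are illustrative, not Calico's real ones:

```go
package main

import "fmt"

type workloadEndpoint struct {
	Name        string
	ContainerID string // sandbox that currently owns the endpoint
}

// teardown deletes the endpoint only when the DEL comes from the owning
// sandbox; a stale DEL still releases its own addresses, which is safe
// because releasing a nonexistent handle is a no-op.
func teardown(wep *workloadEndpoint, cniContainerID string) {
	if wep.ContainerID != cniContainerID {
		fmt.Println("WARNING: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.")
	} else {
		fmt.Println("deleting WEP", wep.Name)
	}
	fmt.Println("releasing IP address(es) for ContainerID", cniContainerID)
}

func main() {
	wep := &workloadEndpoint{
		Name:        "localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0",
		ContainerID: "6154edbae6a689c8...", // current owner, as in the log
	}
	teardown(wep, "3b18946924df3cfb...") // stale sandbox: warn, keep WEP
}
```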
Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.714 [WARNING][6196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.714 [INFO][6196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" HandleID="k8s-pod-network.3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Workload="localhost-k8s-calico--apiserver--69686dc768--ln9mw-eth0" Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.739 [INFO][6196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:05.756902 containerd[1622]: 2026-01-28 01:33:05.749 [INFO][6176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f" Jan 28 01:33:05.758706 containerd[1622]: time="2026-01-28T01:33:05.758667322Z" level=info msg="TearDown network for sandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" successfully" Jan 28 01:33:05.762796 systemd[1]: var-lib-kubelet-pods-7efc6fb0\x2d0d34\x2d4603\x2d98de\x2d2c82b7e71158-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzcff5.mount: Deactivated successfully. Jan 28 01:33:05.764345 kubelet[2972]: I0128 01:33:05.764301 2972 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7efc6fb0-0d34-4603-98de-2c82b7e71158" (UID: "7efc6fb0-0d34-4603-98de-2c82b7e71158"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:33:05.776425 containerd[1622]: time="2026-01-28T01:33:05.776380813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:33:05.776780 containerd[1622]: time="2026-01-28T01:33:05.776752237Z" level=info msg="RemovePodSandbox \"3b18946924df3cfb86d7be1128430988f34ae81e6921d191c954b5541dc6872f\" returns successfully" Jan 28 01:33:05.778223 containerd[1622]: time="2026-01-28T01:33:05.778193236Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:33:05.793787 systemd[1]: var-lib-kubelet-pods-7efc6fb0\x2d0d34\x2d4603\x2d98de\x2d2c82b7e71158-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 28 01:33:05.799953 kubelet[2972]: I0128 01:33:05.797256 2972 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 01:33:05.799953 kubelet[2972]: I0128 01:33:05.797303 2972 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efc6fb0-0d34-4603-98de-2c82b7e71158-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 01:33:05.799953 kubelet[2972]: I0128 01:33:05.797317 2972 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zcff5\" (UniqueName: \"kubernetes.io/projected/7efc6fb0-0d34-4603-98de-2c82b7e71158-kube-api-access-zcff5\") on node \"localhost\" DevicePath \"\"" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:05.974 [WARNING][6217] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gwj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4b5e90d-930c-4b60-ab0a-ec73967e82da", ResourceVersion:"1330", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f", Pod:"csi-node-driver-9gwj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a373fb304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:05.979 [INFO][6217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:05.979 [INFO][6217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" iface="eth0" netns="" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:05.979 [INFO][6217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:05.979 [INFO][6217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.129 [INFO][6226] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.129 [INFO][6226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.129 [INFO][6226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.172 [WARNING][6226] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.172 [INFO][6226] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.181 [INFO][6226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:06.224302 containerd[1622]: 2026-01-28 01:33:06.199 [INFO][6217] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:06.224302 containerd[1622]: time="2026-01-28T01:33:06.223399403Z" level=info msg="TearDown network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" successfully" Jan 28 01:33:06.224302 containerd[1622]: time="2026-01-28T01:33:06.223436013Z" level=info msg="StopPodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" returns successfully" Jan 28 01:33:06.228089 containerd[1622]: time="2026-01-28T01:33:06.228057323Z" level=info msg="RemovePodSandbox for \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:33:06.228462 containerd[1622]: time="2026-01-28T01:33:06.228208119Z" level=info msg="Forcibly stopping sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\"" Jan 28 01:33:06.536403 kubelet[2972]: E0128 01:33:06.536228 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:06.617208 containerd[1622]: time="2026-01-28T01:33:06.613970334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:33:06.919005 containerd[1622]: time="2026-01-28T01:33:06.914497193Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:06.934768 containerd[1622]: time="2026-01-28T01:33:06.934703587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:33:06.935258 containerd[1622]: time="2026-01-28T01:33:06.935193204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:33:06.942588 kubelet[2972]: E0128 01:33:06.940369 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:06.942588 kubelet[2972]: E0128 01:33:06.940436 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:06.942588 kubelet[2972]: E0128 01:33:06.940764 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx52m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:06.942588 kubelet[2972]: E0128 01:33:06.942740 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:06.591 [WARNING][6249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gwj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4b5e90d-930c-4b60-ab0a-ec73967e82da", ResourceVersion:"1330", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60c67a5fc7a6ae5863958272ac2cb2d1f4b671b7ec8f20fc2e19502cfaba46f", Pod:"csi-node-driver-9gwj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a373fb304", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:06.592 [INFO][6249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:06.592 [INFO][6249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" iface="eth0" netns="" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:06.592 [INFO][6249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:06.592 [INFO][6249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.034 [INFO][6259] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.035 [INFO][6259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.035 [INFO][6259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.101 [WARNING][6259] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.108 [INFO][6259] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" HandleID="k8s-pod-network.edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Workload="localhost-k8s-csi--node--driver--9gwj5-eth0" Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.138 [INFO][6259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:07.180191 containerd[1622]: 2026-01-28 01:33:07.150 [INFO][6249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57" Jan 28 01:33:07.180191 containerd[1622]: time="2026-01-28T01:33:07.179456258Z" level=info msg="TearDown network for sandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" successfully" Jan 28 01:33:07.241818 containerd[1622]: time="2026-01-28T01:33:07.241761924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:33:07.242787 containerd[1622]: time="2026-01-28T01:33:07.242751169Z" level=info msg="RemovePodSandbox \"edac66d0f238e81f2924a196eb44a383506fb7185e345359d204e83f2b696c57\" returns successfully" Jan 28 01:33:07.259068 containerd[1622]: time="2026-01-28T01:33:07.258986671Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:33:07.493411 kubelet[2972]: I0128 01:33:07.491487 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nqlz\" (UniqueName: \"kubernetes.io/projected/5859ba2a-a016-4346-9bde-cada03fa1141-kube-api-access-4nqlz\") pod \"whisker-5f975b9dd9-g8mzf\" (UID: \"5859ba2a-a016-4346-9bde-cada03fa1141\") " pod="calico-system/whisker-5f975b9dd9-g8mzf" Jan 28 01:33:07.493411 kubelet[2972]: I0128 01:33:07.491726 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5859ba2a-a016-4346-9bde-cada03fa1141-whisker-backend-key-pair\") pod \"whisker-5f975b9dd9-g8mzf\" (UID: \"5859ba2a-a016-4346-9bde-cada03fa1141\") " pod="calico-system/whisker-5f975b9dd9-g8mzf" Jan 28 01:33:07.493411 kubelet[2972]: I0128 01:33:07.491773 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5859ba2a-a016-4346-9bde-cada03fa1141-whisker-ca-bundle\") pod \"whisker-5f975b9dd9-g8mzf\" (UID: \"5859ba2a-a016-4346-9bde-cada03fa1141\") " pod="calico-system/whisker-5f975b9dd9-g8mzf" Jan 28 01:33:07.735434 containerd[1622]: time="2026-01-28T01:33:07.735380752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f975b9dd9-g8mzf,Uid:5859ba2a-a016-4346-9bde-cada03fa1141,Namespace:calico-system,Attempt:0,}" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:07.801 [WARNING][6278] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"25dca920-f21c-49d2-adf9-753622c450d8", ResourceVersion:"1428", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19", Pod:"calico-apiserver-69686dc768-5qb5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54950a0a884", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:07.801 [INFO][6278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:07.801 [INFO][6278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" iface="eth0" netns="" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:07.804 [INFO][6278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:07.804 [INFO][6278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.029 [INFO][6289] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.029 [INFO][6289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.029 [INFO][6289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.093 [WARNING][6289] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.093 [INFO][6289] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.100 [INFO][6289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:08.127805 containerd[1622]: 2026-01-28 01:33:08.118 [INFO][6278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:08.127805 containerd[1622]: time="2026-01-28T01:33:08.124357655Z" level=info msg="TearDown network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" successfully" Jan 28 01:33:08.127805 containerd[1622]: time="2026-01-28T01:33:08.124393994Z" level=info msg="StopPodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" returns successfully" Jan 28 01:33:08.127805 containerd[1622]: time="2026-01-28T01:33:08.125516001Z" level=info msg="RemovePodSandbox for \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:33:08.127805 containerd[1622]: time="2026-01-28T01:33:08.125550676Z" level=info msg="Forcibly stopping sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\"" Jan 28 01:33:08.611777 containerd[1622]: time="2026-01-28T01:33:08.611589646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:33:08.667137 kubelet[2972]: I0128 01:33:08.657891 2972 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efc6fb0-0d34-4603-98de-2c82b7e71158" path="/var/lib/kubelet/pods/7efc6fb0-0d34-4603-98de-2c82b7e71158/volumes" Jan 28 01:33:08.767012 containerd[1622]: time="2026-01-28T01:33:08.764817422Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:08.787253 containerd[1622]: time="2026-01-28T01:33:08.787182509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:33:08.789196 containerd[1622]: time="2026-01-28T01:33:08.787448603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:33:08.798094 kubelet[2972]: E0128 01:33:08.794553 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:33:08.801075 kubelet[2972]: E0128 01:33:08.799567 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:33:08.801075 kubelet[2972]: E0128 01:33:08.800063 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:08.806238 containerd[1622]: time="2026-01-28T01:33:08.806056369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:33:08.979346 containerd[1622]: time="2026-01-28T01:33:08.974688663Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:08.993251 containerd[1622]: time="2026-01-28T01:33:08.993017190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:33:08.993251 containerd[1622]: time="2026-01-28T01:33:08.993177354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:33:08.999271 kubelet[2972]: E0128 01:33:08.996244 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:33:08.999271 kubelet[2972]: E0128 01:33:08.996321 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:33:08.999271 kubelet[2972]: E0128 01:33:08.997873 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:09.016938 kubelet[2972]: E0128 01:33:09.002002 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:33:09.121898 systemd-networkd[1276]: cali198e6080f02: Link UP Jan 28 01:33:09.132368 systemd-networkd[1276]: cali198e6080f02: Gained carrier Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.140 [INFO][6294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0 whisker-5f975b9dd9- calico-system 5859ba2a-a016-4346-9bde-cada03fa1141 1440 0 2026-01-28 01:33:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f975b9dd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f975b9dd9-g8mzf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali198e6080f02 [] [] }} ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.140 [INFO][6294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.619 [INFO][6322] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" HandleID="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Workload="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.619 [INFO][6322] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" HandleID="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Workload="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee4d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f975b9dd9-g8mzf", "timestamp":"2026-01-28 01:33:08.619129012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.619 [INFO][6322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.619 [INFO][6322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.619 [INFO][6322] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.705 [INFO][6322] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.743 [INFO][6322] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.774 [INFO][6322] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.790 [INFO][6322] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.843 [INFO][6322] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.849 [INFO][6322] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.874 [INFO][6322] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121 Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:08.952 [INFO][6322] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:09.023 [INFO][6322] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:09.025 [INFO][6322] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" host="localhost" Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:09.027 [INFO][6322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
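The ipam messages above walk Calico's block-allocation path step by step: confirm the host's affinity for the 192.168.88.128/26 block, load the block, claim the first free slot (here .137, with .129-.136 already in use), record a handle for later release, and write the block back. A sketch of that allocation under the assumption of a simple ordinal-to-handle map; the field names are illustrative:

```go
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr      netip.Prefix
	allocated map[int]string // ordinal within the block -> handle
}

// assign claims the first free ordinal for handle and returns its address.
func (b *block) assign(handle string) (netip.Addr, bool) {
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	for ord := 0; ord < size; ord++ {
		if _, taken := b.allocated[ord]; taken {
			continue
		}
		b.allocated[ord] = handle
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next()
		}
		return addr, true
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), allocated: map[int]string{}}
	for ord := 0; ord <= 8; ord++ { // .128 reserved plus .129-.136 in use, as in the log
		b.allocated[ord] = "existing-handle"
	}
	ip, ok := b.assign("k8s-pod-network.4b690f302a55...")
	fmt.Println(ip, ok) // 192.168.88.137 true
}
```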
Jan 28 01:33:09.293750 containerd[1622]: 2026-01-28 01:33:09.027 [INFO][6322] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" HandleID="k8s-pod-network.4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Workload="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.308289 containerd[1622]: 2026-01-28 01:33:09.058 [INFO][6294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0", GenerateName:"whisker-5f975b9dd9-", Namespace:"calico-system", SelfLink:"", UID:"5859ba2a-a016-4346-9bde-cada03fa1141", ResourceVersion:"1440", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f975b9dd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f975b9dd9-g8mzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali198e6080f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:09.308289 containerd[1622]: 2026-01-28 01:33:09.058 [INFO][6294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.308289 containerd[1622]: 2026-01-28 01:33:09.058 [INFO][6294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali198e6080f02 ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.308289 containerd[1622]: 2026-01-28 01:33:09.139 [INFO][6294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.308289 containerd[1622]: 2026-01-28 01:33:09.162 [INFO][6294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0", GenerateName:"whisker-5f975b9dd9-", Namespace:"calico-system", SelfLink:"", UID:"5859ba2a-a016-4346-9bde-cada03fa1141", ResourceVersion:"1440", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f975b9dd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121", Pod:"whisker-5f975b9dd9-g8mzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali198e6080f02", MAC:"52:12:7d:87:26:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:09.308289 containerd[1622]: 2026-01-28 01:33:09.272 [INFO][6294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121" Namespace="calico-system" Pod="whisker-5f975b9dd9-g8mzf" WorkloadEndpoint="localhost-k8s-whisker--5f975b9dd9--g8mzf-eth0" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.731 [WARNING][6326] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0", GenerateName:"calico-apiserver-69686dc768-", Namespace:"calico-apiserver", SelfLink:"", UID:"25dca920-f21c-49d2-adf9-753622c450d8", ResourceVersion:"1428", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 30, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69686dc768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8c50d9b477fff574f334e2df2ef487915fc802f5b00cafde18de6f5953e1f19", Pod:"calico-apiserver-69686dc768-5qb5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54950a0a884", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.731 [INFO][6326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.731 [INFO][6326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" iface="eth0" netns="" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.731 [INFO][6326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.733 [INFO][6326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.902 [INFO][6341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:08.903 [INFO][6341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:09.028 [INFO][6341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:09.174 [WARNING][6341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:09.174 [INFO][6341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" HandleID="k8s-pod-network.7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Workload="localhost-k8s-calico--apiserver--69686dc768--5qb5l-eth0" Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:09.230 [INFO][6341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:09.323419 containerd[1622]: 2026-01-28 01:33:09.298 [INFO][6326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c" Jan 28 01:33:09.324245 containerd[1622]: time="2026-01-28T01:33:09.324205256Z" level=info msg="TearDown network for sandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" successfully" Jan 28 01:33:09.334918 containerd[1622]: time="2026-01-28T01:33:09.334856429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:33:09.335558 containerd[1622]: time="2026-01-28T01:33:09.335423091Z" level=info msg="RemovePodSandbox \"7327ecbef589ad24d63996aae9a837d458573fbffa82a8fc3e4c0157385cd66c\" returns successfully" Jan 28 01:33:09.338240 containerd[1622]: time="2026-01-28T01:33:09.338049773Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:33:09.408068 containerd[1622]: time="2026-01-28T01:33:09.407919114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:33:09.409908 containerd[1622]: time="2026-01-28T01:33:09.409851495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:33:09.410237 containerd[1622]: time="2026-01-28T01:33:09.410101980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:33:09.413027 containerd[1622]: time="2026-01-28T01:33:09.410942056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:33:09.521978 systemd[1]: run-containerd-runc-k8s.io-4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121-runc.829Ymq.mount: Deactivated successfully. 
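The PullImage failures that follow are plain HTTP 404s: containerd's resolver asks ghcr.io for the manifest of the requested tag, receives http.StatusNotFound ("trying next host"), and surfaces that to the kubelet as a gRPC NotFound error. The same probe can be made by hand against the OCI distribution API. The sketch below is hedged: Python stdlib only, and the anonymous token endpoint is an assumption based on ghcr.io's usual Docker token flow, so verify it against the registry's WWW-Authenticate challenge before relying on it:

    # Hedged sketch: ask the registry for a tag's manifest the way a
    # resolver does (GET /v2/<name>/manifests/<tag>). A 404 here is the
    # same "not found" containerd logs below. Token URL is an assumption.
    import json
    import urllib.error
    import urllib.request

    NAME = "flatcar/calico/apiserver"   # image name from the log
    TAG = "v3.30.4"                     # tag from the log

    token_url = "https://ghcr.io/token?scope=repository:%s:pull" % NAME
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        "https://ghcr.io/v2/%s/manifests/%s" % (NAME, TAG),
        headers={
            "Authorization": "Bearer " + token,
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("found:", resp.headers.get("Docker-Content-Digest"))
    except urllib.error.HTTPError as e:
        print("registry answered", e.code)   # 404 matches the log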
Jan 28 01:33:09.572676 containerd[1622]: time="2026-01-28T01:33:09.564585994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:33:09.649686 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:33:09.706081 containerd[1622]: time="2026-01-28T01:33:09.705976609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:09.735958 containerd[1622]: time="2026-01-28T01:33:09.735003238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:33:09.737429 containerd[1622]: time="2026-01-28T01:33:09.736785395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:33:09.741499 kubelet[2972]: E0128 01:33:09.739396 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:09.741499 kubelet[2972]: E0128 01:33:09.740012 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:09.741499 kubelet[2972]: E0128 01:33:09.740532 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4s7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:09.749032 kubelet[2972]: E0128 01:33:09.742115 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.552 [WARNING][6379] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.552 [INFO][6379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.552 [INFO][6379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" iface="eth0" netns="" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.552 [INFO][6379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.552 [INFO][6379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.679 [INFO][6414] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.687 [INFO][6414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
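The cf028d0... teardown continuing below shows why repeated StopPodSandbox calls are safe: as with the 7327ec... teardown above, the plugin first tries to release by handleID, and when the handle is already gone it logs the "Asked to release address but it doesn't exist. Ignoring" warning and falls back to a workload-scoped release, so the whole operation is idempotent. Purely illustrative pseudologic follows (hypothetical names and data, not Calico's actual API):

    # Illustrative only, hypothetical names: the release path is
    # idempotent, so tearing the same sandbox down twice is harmless.
    allocations = {"by_handle": set(), "by_workload": {"wep-a": {"192.168.88.130"}}}

    def release(handle_id, workload_id):
        if handle_id in allocations["by_handle"]:
            allocations["by_handle"].discard(handle_id)
            return "released by handleID"
        # Matches the WARNING in the log: nothing under that handle.
        allocations["by_workload"].pop(workload_id, None)
        return "fell back to workloadID"

    print(release("handle-a", "wep-a"))   # fell back to workloadID
    print(release("handle-a", "wep-a"))   # safe to repeat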
Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.687 [INFO][6414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.731 [WARNING][6414] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.731 [INFO][6414] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.752 [INFO][6414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:09.785236 containerd[1622]: 2026-01-28 01:33:09.775 [INFO][6379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:09.785236 containerd[1622]: time="2026-01-28T01:33:09.785075429Z" level=info msg="TearDown network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" successfully" Jan 28 01:33:09.785236 containerd[1622]: time="2026-01-28T01:33:09.785108992Z" level=info msg="StopPodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" returns successfully" Jan 28 01:33:09.792424 containerd[1622]: time="2026-01-28T01:33:09.791852666Z" level=info msg="RemovePodSandbox for \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:33:09.792424 containerd[1622]: time="2026-01-28T01:33:09.791901539Z" level=info msg="Forcibly stopping sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\"" Jan 28 01:33:09.929282 containerd[1622]: time="2026-01-28T01:33:09.929010041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f975b9dd9-g8mzf,Uid:5859ba2a-a016-4346-9bde-cada03fa1141,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b690f302a550860cc2d165aac9c96188f0eba027fcba33b41dcce4051fcc121\"" Jan 28 01:33:09.955212 containerd[1622]: time="2026-01-28T01:33:09.942406532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:33:10.064724 containerd[1622]: time="2026-01-28T01:33:10.063920391Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:10.076151 containerd[1622]: time="2026-01-28T01:33:10.074170934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:33:10.076151 containerd[1622]: time="2026-01-28T01:33:10.075833373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:33:10.082195 kubelet[2972]: E0128 01:33:10.077140 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:33:10.082195 kubelet[2972]: E0128 01:33:10.077209 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:33:10.082195 kubelet[2972]: E0128 01:33:10.077342 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:10.117173 containerd[1622]: time="2026-01-28T01:33:10.108409970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:33:10.245581 containerd[1622]: time="2026-01-28T01:33:10.244021435Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:10.261147 containerd[1622]: time="2026-01-28T01:33:10.255102891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:33:10.261147 containerd[1622]: time="2026-01-28T01:33:10.255237317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:33:10.266966 kubelet[2972]: E0128 01:33:10.259705 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:33:10.266966 kubelet[2972]: E0128 01:33:10.263239 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:33:10.292565 kubelet[2972]: E0128 01:33:10.291701 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:10.302225 kubelet[2972]: E0128 01:33:10.297236 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.077 [WARNING][6431] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.077 [INFO][6431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.077 [INFO][6431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" iface="eth0" netns="" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.077 [INFO][6431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.077 [INFO][6431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.326 [INFO][6446] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.327 [INFO][6446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.327 [INFO][6446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.406 [WARNING][6446] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.406 [INFO][6446] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" HandleID="k8s-pod-network.cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.422 [INFO][6446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:10.471239 containerd[1622]: 2026-01-28 01:33:10.453 [INFO][6431] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889" Jan 28 01:33:10.472196 containerd[1622]: time="2026-01-28T01:33:10.472055747Z" level=info msg="TearDown network for sandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" successfully" Jan 28 01:33:10.494511 containerd[1622]: time="2026-01-28T01:33:10.493705377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:33:10.494511 containerd[1622]: time="2026-01-28T01:33:10.493899745Z" level=info msg="RemovePodSandbox \"cf028d0caa07571b66918b3d18c759d36c8806eed259ceca43ff6bd2642fb889\" returns successfully" Jan 28 01:33:10.494883 containerd[1622]: time="2026-01-28T01:33:10.494738044Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:33:10.757702 kubelet[2972]: E0128 01:33:10.757354 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:33:10.798814 systemd-networkd[1276]: cali198e6080f02: Gained IPv6LL Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:10.785 [WARNING][6462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--556k8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c7adaa55-8214-45ce-9d9c-4b2fe100270c", ResourceVersion:"1393", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a", Pod:"coredns-668d6bf9bc-556k8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib067567a374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:10.794 [INFO][6462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:10.794 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" iface="eth0" netns="" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:10.794 [INFO][6462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:10.794 [INFO][6462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.015 [INFO][6470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.015 [INFO][6470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.015 [INFO][6470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.042 [WARNING][6470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.042 [INFO][6470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.073 [INFO][6470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:11.110108 containerd[1622]: 2026-01-28 01:33:11.100 [INFO][6462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.114130 containerd[1622]: time="2026-01-28T01:33:11.110756341Z" level=info msg="TearDown network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" successfully" Jan 28 01:33:11.114130 containerd[1622]: time="2026-01-28T01:33:11.110853115Z" level=info msg="StopPodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" returns successfully" Jan 28 01:33:11.114130 containerd[1622]: time="2026-01-28T01:33:11.112072806Z" level=info msg="RemovePodSandbox for \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:33:11.114130 containerd[1622]: time="2026-01-28T01:33:11.112108734Z" level=info msg="Forcibly stopping sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\"" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.350 [WARNING][6495] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--556k8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c7adaa55-8214-45ce-9d9c-4b2fe100270c", ResourceVersion:"1393", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e07e8375bf47147c3fd42f55d239838337eb81af83fff1c190ca3476bec7eb0a", Pod:"coredns-668d6bf9bc-556k8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib067567a374", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.350 [INFO][6495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.350 [INFO][6495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" iface="eth0" netns="" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.350 [INFO][6495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.350 [INFO][6495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.551 [INFO][6504] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.557 [INFO][6504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.557 [INFO][6504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.622 [WARNING][6504] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.622 [INFO][6504] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" HandleID="k8s-pod-network.7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Workload="localhost-k8s-coredns--668d6bf9bc--556k8-eth0" Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.633 [INFO][6504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:33:11.677364 containerd[1622]: 2026-01-28 01:33:11.653 [INFO][6495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657" Jan 28 01:33:11.677364 containerd[1622]: time="2026-01-28T01:33:11.665254760Z" level=info msg="TearDown network for sandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" successfully" Jan 28 01:33:11.770516 containerd[1622]: time="2026-01-28T01:33:11.766334708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:33:11.770516 containerd[1622]: time="2026-01-28T01:33:11.766427154Z" level=info msg="RemovePodSandbox \"7c786dc63e4f23bc8a47f492bd76c49b10a386c12be949660a89b15ff8a01657\" returns successfully" Jan 28 01:33:11.771353 kubelet[2972]: E0128 01:33:11.771121 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:33:12.657575 containerd[1622]: time="2026-01-28T01:33:12.657313428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:33:12.803966 containerd[1622]: time="2026-01-28T01:33:12.803537431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:12.818444 containerd[1622]: time="2026-01-28T01:33:12.814766999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:33:12.818444 containerd[1622]: time="2026-01-28T01:33:12.815029528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:33:12.818873 kubelet[2972]: E0128 01:33:12.815242 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:33:12.818873 kubelet[2972]: E0128 01:33:12.815321 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:33:12.818873 kubelet[2972]: E0128 01:33:12.815511 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:12.818873 kubelet[2972]: E0128 01:33:12.817967 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:33:15.369251 kubelet[2972]: E0128 01:33:15.369206 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:16.583006 containerd[1622]: time="2026-01-28T01:33:16.572439356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:33:16.773137 containerd[1622]: time="2026-01-28T01:33:16.768711676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:16.785264 containerd[1622]: time="2026-01-28T01:33:16.782274329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:33:16.785264 containerd[1622]: time="2026-01-28T01:33:16.782403943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:33:16.802509 kubelet[2972]: E0128 01:33:16.790454 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:33:16.802509 kubelet[2972]: E0128 01:33:16.794727 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:33:16.802509 kubelet[2972]: E0128 01:33:16.794920 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkb45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:16.802509 kubelet[2972]: E0128 01:33:16.799387 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:33:17.558911 kubelet[2972]: E0128 01:33:17.558477 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:33:20.605359 kubelet[2972]: E0128 01:33:20.601299 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:33:20.613269 kubelet[2972]: E0128 01:33:20.609007 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:33:22.631159 containerd[1622]: time="2026-01-28T01:33:22.628534154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:33:22.828950 containerd[1622]: time="2026-01-28T01:33:22.827578804Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:22.840668 containerd[1622]: time="2026-01-28T01:33:22.838839636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:33:22.840668 containerd[1622]: time="2026-01-28T01:33:22.838979731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 
01:33:22.842409 kubelet[2972]: E0128 01:33:22.839997 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:33:22.842409 kubelet[2972]: E0128 01:33:22.840144 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:33:22.842409 kubelet[2972]: E0128 01:33:22.840712 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:22.880488 containerd[1622]: time="2026-01-28T01:33:22.851058713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:33:23.075453 containerd[1622]: time="2026-01-28T01:33:23.074574818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:23.102716 containerd[1622]: time="2026-01-28T01:33:23.099579845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:33:23.102716 containerd[1622]: time="2026-01-28T01:33:23.100014228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:33:23.105287 kubelet[2972]: E0128 01:33:23.105170 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:33:23.106012 kubelet[2972]: E0128 01:33:23.105713 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:33:23.108086 kubelet[2972]: E0128 01:33:23.106774 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:23.114240 kubelet[2972]: E0128 01:33:23.110029 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:33:27.559827 kubelet[2972]: E0128 01:33:27.559776 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:33:31.312829 containerd[1622]: time="2026-01-28T01:33:31.312684128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:33:31.414280 containerd[1622]: time="2026-01-28T01:33:31.413812013Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:31.423050 containerd[1622]: time="2026-01-28T01:33:31.422872074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:33:31.423050 containerd[1622]: time="2026-01-28T01:33:31.423034966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:33:31.423416 kubelet[2972]: E0128 01:33:31.423238 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:31.423416 kubelet[2972]: E0128 01:33:31.423300 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:31.424906 kubelet[2972]: E0128 01:33:31.423730 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx52m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:31.425109 containerd[1622]: time="2026-01-28T01:33:31.424211406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:33:31.425423 kubelet[2972]: E0128 01:33:31.425213 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:33:31.566700 containerd[1622]: time="2026-01-28T01:33:31.565532586Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:31.576462 containerd[1622]: time="2026-01-28T01:33:31.575434948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:33:31.576462 containerd[1622]: time="2026-01-28T01:33:31.575574366Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:33:31.576721 kubelet[2972]: E0128 01:33:31.575820 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:33:31.576721 kubelet[2972]: E0128 01:33:31.575881 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:33:31.576721 kubelet[2972]: E0128 01:33:31.576145 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:31.577007 containerd[1622]: time="2026-01-28T01:33:31.576957418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:33:31.728413 containerd[1622]: time="2026-01-28T01:33:31.725823776Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:31.745775 containerd[1622]: time="2026-01-28T01:33:31.745136834Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:33:31.745775 containerd[1622]: time="2026-01-28T01:33:31.745276381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:33:31.746123 kubelet[2972]: E0128 01:33:31.745913 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:31.746123 kubelet[2972]: E0128 01:33:31.745984 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:33:31.747550 kubelet[2972]: E0128 01:33:31.746242 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4s7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:31.758939 kubelet[2972]: E0128 01:33:31.755499 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:33:31.759119 containerd[1622]: time="2026-01-28T01:33:31.755916003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:33:31.908258 containerd[1622]: time="2026-01-28T01:33:31.895685829Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:31.909166 containerd[1622]: time="2026-01-28T01:33:31.909071903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:33:31.912490 containerd[1622]: time="2026-01-28T01:33:31.912408299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:33:31.922844 kubelet[2972]: E0128 01:33:31.922732 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:33:31.928535 kubelet[2972]: E0128 01:33:31.928206 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:33:31.928535 kubelet[2972]: E0128 01:33:31.928461 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:31.930800 kubelet[2972]: E0128 01:33:31.930552 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:33:32.581263 kubelet[2972]: E0128 01:33:32.581208 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:33:37.573576 kubelet[2972]: E0128 01:33:37.573479 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:33:41.961856 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:37740.service - OpenSSH per-connection server daemon (10.0.0.1:37740). Jan 28 01:33:42.268433 sshd[6557]: Accepted publickey for core from 10.0.0.1 port 37740 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:33:42.274829 sshd[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:42.316153 systemd-logind[1612]: New session 10 of user core. Jan 28 01:33:42.333175 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:33:42.569462 containerd[1622]: time="2026-01-28T01:33:42.568193582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:33:42.755064 containerd[1622]: time="2026-01-28T01:33:42.754986541Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:42.761051 containerd[1622]: time="2026-01-28T01:33:42.757262792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:33:42.761051 containerd[1622]: time="2026-01-28T01:33:42.757373407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:33:42.764734 kubelet[2972]: E0128 01:33:42.761764 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:33:42.764734 kubelet[2972]: E0128 01:33:42.761834 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:33:42.764734 
kubelet[2972]: E0128 01:33:42.762013 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:42.764734 kubelet[2972]: E0128 01:33:42.763692 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 
01:33:43.046800 sshd[6557]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:43.059899 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:37740.service: Deactivated successfully. Jan 28 01:33:43.074343 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:33:43.079212 systemd-logind[1612]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:33:43.082050 systemd-logind[1612]: Removed session 10. Jan 28 01:33:43.556406 kubelet[2972]: E0128 01:33:43.555725 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:45.577066 containerd[1622]: time="2026-01-28T01:33:45.576778720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:33:45.705138 containerd[1622]: time="2026-01-28T01:33:45.705058442Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:45.709071 containerd[1622]: time="2026-01-28T01:33:45.708925406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:33:45.709197 containerd[1622]: time="2026-01-28T01:33:45.709119797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:33:45.712205 kubelet[2972]: E0128 01:33:45.711687 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:33:45.712205 kubelet[2972]: E0128 01:33:45.711836 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:33:45.712205 kubelet[2972]: E0128 01:33:45.712133 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkb45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:45.715354 kubelet[2972]: E0128 01:33:45.713991 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:33:46.579417 kubelet[2972]: E0128 
01:33:46.574543 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:33:46.579417 kubelet[2972]: E0128 01:33:46.577022 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:33:46.580187 kubelet[2972]: E0128 01:33:46.580144 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:33:48.123794 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:35890.service - OpenSSH per-connection server daemon (10.0.0.1:35890). Jan 28 01:33:48.378060 sshd[6615]: Accepted publickey for core from 10.0.0.1 port 35890 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:33:48.376789 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:48.402443 systemd-logind[1612]: New session 11 of user core. Jan 28 01:33:48.412364 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:33:48.901673 sshd[6615]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:48.915038 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:35890.service: Deactivated successfully. Jan 28 01:33:48.927169 systemd-logind[1612]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:33:48.929810 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:33:48.931497 systemd-logind[1612]: Removed session 11. 
Jan 28 01:33:50.575727 kubelet[2972]: E0128 01:33:50.574927 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:51.567446 containerd[1622]: time="2026-01-28T01:33:51.562463049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:33:51.680134 containerd[1622]: time="2026-01-28T01:33:51.679271178Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:51.790095 containerd[1622]: time="2026-01-28T01:33:51.773873693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:33:51.790095 containerd[1622]: time="2026-01-28T01:33:51.773972931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:33:51.794873 kubelet[2972]: E0128 01:33:51.775048 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:33:51.794873 kubelet[2972]: E0128 01:33:51.775121 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:33:51.794873 kubelet[2972]: E0128 01:33:51.775802 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:51.797133 containerd[1622]: time="2026-01-28T01:33:51.792425811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:33:52.029389 containerd[1622]: time="2026-01-28T01:33:52.011888674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:33:52.029389 containerd[1622]: time="2026-01-28T01:33:52.025719542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:33:52.029389 containerd[1622]: time="2026-01-28T01:33:52.025892503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:33:52.029830 kubelet[2972]: E0128 01:33:52.026122 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:33:52.029830 kubelet[2972]: E0128 01:33:52.026241 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:33:52.029830 kubelet[2972]: E0128 01:33:52.026394 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:33:52.029830 kubelet[2972]: E0128 01:33:52.028056 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:33:53.936733 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:43588.service - OpenSSH per-connection server daemon (10.0.0.1:43588). Jan 28 01:33:53.996804 sshd[6634]: Accepted publickey for core from 10.0.0.1 port 43588 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:33:54.011865 sshd[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:33:54.031120 systemd-logind[1612]: New session 12 of user core. 
Jan 28 01:33:54.040302 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:33:54.557100 sshd[6634]: pam_unix(sshd:session): session closed for user core Jan 28 01:33:54.564579 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:43588.service: Deactivated successfully. Jan 28 01:33:54.573926 systemd-logind[1612]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:33:54.575456 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:33:54.585341 systemd-logind[1612]: Removed session 12. Jan 28 01:33:57.559932 kubelet[2972]: E0128 01:33:57.557458 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:33:57.584830 kubelet[2972]: E0128 01:33:57.565408 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:33:58.561278 kubelet[2972]: E0128 01:33:58.560584 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:33:58.574899 kubelet[2972]: E0128 01:33:58.574809 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:33:59.616082 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:43590.service - OpenSSH per-connection server daemon (10.0.0.1:43590). 
Jan 28 01:34:00.042459 sshd[6652]: Accepted publickey for core from 10.0.0.1 port 43590 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:00.051670 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:00.064542 systemd-logind[1612]: New session 13 of user core. Jan 28 01:34:00.090770 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:34:00.726751 kubelet[2972]: E0128 01:34:00.707058 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:34:01.124963 sshd[6652]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:01.150256 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:43590.service: Deactivated successfully. Jan 28 01:34:01.172869 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:34:01.181271 systemd-logind[1612]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:34:01.191154 systemd-logind[1612]: Removed session 13. Jan 28 01:34:01.573456 kubelet[2972]: E0128 01:34:01.573407 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:34:04.558319 kubelet[2972]: E0128 01:34:04.556066 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:05.559678 kubelet[2972]: E0128 01:34:05.559390 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:34:06.185102 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:40040.service - OpenSSH per-connection server daemon (10.0.0.1:40040). 
Jan 28 01:34:06.459723 sshd[6670]: Accepted publickey for core from 10.0.0.1 port 40040 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:06.463075 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:06.495466 systemd-logind[1612]: New session 14 of user core. Jan 28 01:34:06.503010 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:34:07.144452 sshd[6670]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:07.172332 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:40040.service: Deactivated successfully. Jan 28 01:34:07.185678 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:34:07.185991 systemd-logind[1612]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:34:07.189715 systemd-logind[1612]: Removed session 14. Jan 28 01:34:11.589876 kubelet[2972]: E0128 01:34:11.589491 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:34:11.594957 kubelet[2972]: E0128 01:34:11.592322 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:34:11.807354 containerd[1622]: time="2026-01-28T01:34:11.807160942Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:34:12.285919 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:40044.service - OpenSSH per-connection server daemon (10.0.0.1:40044). Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.125 [WARNING][6701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441fbe90-529b-45d0-b9a6-f443cf214304", ResourceVersion:"1411", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35", Pod:"coredns-668d6bf9bc-rt7g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59b88363bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.127 [INFO][6701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.127 [INFO][6701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" iface="eth0" netns="" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.127 [INFO][6701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.127 [INFO][6701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.257 [INFO][6710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.262 [INFO][6710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.264 [INFO][6710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.396 [WARNING][6710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.400 [INFO][6710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.428 [INFO][6710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:34:12.473589 containerd[1622]: 2026-01-28 01:34:12.463 [INFO][6701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:12.473589 containerd[1622]: time="2026-01-28T01:34:12.470389442Z" level=info msg="TearDown network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" successfully" Jan 28 01:34:12.473589 containerd[1622]: time="2026-01-28T01:34:12.470423104Z" level=info msg="StopPodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" returns successfully" Jan 28 01:34:12.474444 containerd[1622]: time="2026-01-28T01:34:12.474036245Z" level=info msg="RemovePodSandbox for \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:34:12.474444 containerd[1622]: time="2026-01-28T01:34:12.474079456Z" level=info msg="Forcibly stopping sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\"" Jan 28 01:34:12.568511 sshd[6716]: Accepted publickey for core from 10.0.0.1 port 40044 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:12.572134 sshd[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:12.579790 systemd-logind[1612]: New session 15 of user core. Jan 28 01:34:12.597458 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:12.737 [WARNING][6728] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441fbe90-529b-45d0-b9a6-f443cf214304", ResourceVersion:"1411", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8a2a2ac989c4856beaa181f940c414b9ae870f071025768c6b6cfc62b68fc35", Pod:"coredns-668d6bf9bc-rt7g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59b88363bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:12.738 [INFO][6728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:12.738 [INFO][6728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" iface="eth0" netns="" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:12.738 [INFO][6728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:12.738 [INFO][6728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.135 [INFO][6746] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.135 [INFO][6746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.136 [INFO][6746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.232 [WARNING][6746] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.232 [INFO][6746] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" HandleID="k8s-pod-network.d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Workload="localhost-k8s-coredns--668d6bf9bc--rt7g9-eth0" Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.285 [INFO][6746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:34:13.345068 containerd[1622]: 2026-01-28 01:34:13.311 [INFO][6728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3" Jan 28 01:34:13.372766 containerd[1622]: time="2026-01-28T01:34:13.369773576Z" level=info msg="TearDown network for sandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" successfully" Jan 28 01:34:13.384683 sshd[6716]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:13.397818 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:40044.service: Deactivated successfully. Jan 28 01:34:13.410550 containerd[1622]: time="2026-01-28T01:34:13.405148972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:34:13.410550 containerd[1622]: time="2026-01-28T01:34:13.405696752Z" level=info msg="RemovePodSandbox \"d64744b7d1a817b336966e81c8b12012faaa582a62c3c0cad401c91a68534ef3\" returns successfully" Jan 28 01:34:13.410550 containerd[1622]: time="2026-01-28T01:34:13.409963340Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:34:13.415463 systemd-logind[1612]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:34:13.418556 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:34:13.423757 systemd-logind[1612]: Removed session 15. 
Jan 28 01:34:13.670726 containerd[1622]: time="2026-01-28T01:34:13.598144546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:34:13.817317 containerd[1622]: time="2026-01-28T01:34:13.815747112Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:13.832267 containerd[1622]: time="2026-01-28T01:34:13.831875737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:34:13.832850 containerd[1622]: time="2026-01-28T01:34:13.832524986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:34:13.850884 kubelet[2972]: E0128 01:34:13.833158 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:34:13.854523 kubelet[2972]: E0128 01:34:13.851563 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:34:13.854523 kubelet[2972]: E0128 01:34:13.854442 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4s7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:13.857287 kubelet[2972]: E0128 01:34:13.855742 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.568 [WARNING][6767] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0", GenerateName:"calico-kube-controllers-f96f445cb-", Namespace:"calico-system", SelfLink:"", UID:"7b83327f-83d8-4d0b-8be8-e67980a37b46", ResourceVersion:"1781", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f96f445cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f", Pod:"calico-kube-controllers-f96f445cb-js8kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia268763d958", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.569 [INFO][6767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.569 [INFO][6767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" iface="eth0" netns="" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.569 [INFO][6767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.569 [INFO][6767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.859 [INFO][6777] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.860 [INFO][6777] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.860 [INFO][6777] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.915 [WARNING][6777] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.927 [INFO][6777] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:13.981 [INFO][6777] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:34:14.058379 containerd[1622]: 2026-01-28 01:34:14.015 [INFO][6767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:14.058379 containerd[1622]: time="2026-01-28T01:34:14.058123599Z" level=info msg="TearDown network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" successfully" Jan 28 01:34:14.058379 containerd[1622]: time="2026-01-28T01:34:14.058158935Z" level=info msg="StopPodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" returns successfully" Jan 28 01:34:14.063873 containerd[1622]: time="2026-01-28T01:34:14.063831172Z" level=info msg="RemovePodSandbox for \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:34:14.064527 containerd[1622]: time="2026-01-28T01:34:14.064377541Z" level=info msg="Forcibly stopping sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\"" Jan 28 01:34:14.586834 containerd[1622]: time="2026-01-28T01:34:14.582004048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:34:14.880569 containerd[1622]: time="2026-01-28T01:34:14.868265409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:14.893822 containerd[1622]: time="2026-01-28T01:34:14.891462726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:34:14.893822 containerd[1622]: time="2026-01-28T01:34:14.891704076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:34:14.894020 kubelet[2972]: E0128 01:34:14.892587 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:34:14.894020 kubelet[2972]: E0128 01:34:14.893279 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:34:14.975931 kubelet[2972]: E0128 01:34:14.893569 2972 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx52m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:14.975931 kubelet[2972]: E0128 01:34:14.923397 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:34:15.101925 systemd[1]: run-containerd-runc-k8s.io-0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775-runc.xt9ugr.mount: Deactivated successfully. Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.479 [WARNING][6793] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0", GenerateName:"calico-kube-controllers-f96f445cb-", Namespace:"calico-system", SelfLink:"", UID:"7b83327f-83d8-4d0b-8be8-e67980a37b46", ResourceVersion:"1781", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f96f445cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dea2b593054ef775dd01f334549a46db24091a43291993a3e5e74638e6a1316f", Pod:"calico-kube-controllers-f96f445cb-js8kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia268763d958", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.480 [INFO][6793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.480 [INFO][6793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" iface="eth0" netns="" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.480 [INFO][6793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.480 [INFO][6793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.973 [INFO][6802] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.974 [INFO][6802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:14.974 [INFO][6802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:15.073 [WARNING][6802] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:15.078 [INFO][6802] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" HandleID="k8s-pod-network.3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Workload="localhost-k8s-calico--kube--controllers--f96f445cb--js8kb-eth0" Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:15.121 [INFO][6802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:34:15.182520 containerd[1622]: 2026-01-28 01:34:15.152 [INFO][6793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec" Jan 28 01:34:15.211801 containerd[1622]: time="2026-01-28T01:34:15.207733913Z" level=info msg="TearDown network for sandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" successfully" Jan 28 01:34:15.247162 containerd[1622]: time="2026-01-28T01:34:15.246961704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:34:15.247162 containerd[1622]: time="2026-01-28T01:34:15.247097387Z" level=info msg="RemovePodSandbox \"3003ea5e3de2cf17b270ac0f5605d94c75fd4181835e80a67a2c3aa4418b38ec\" returns successfully" Jan 28 01:34:15.262492 containerd[1622]: time="2026-01-28T01:34:15.260070352Z" level=info msg="StopPodSandbox for \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\"" Jan 28 01:34:15.610725 kubelet[2972]: E0128 01:34:15.610503 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:15.809 [WARNING][6834] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:15.810 [INFO][6834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:15.810 [INFO][6834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" iface="eth0" netns="" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:15.810 [INFO][6834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:15.810 [INFO][6834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.028 [INFO][6850] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.028 [INFO][6850] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.029 [INFO][6850] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.071 [WARNING][6850] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.071 [INFO][6850] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.084 [INFO][6850] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:34:16.187042 containerd[1622]: 2026-01-28 01:34:16.144 [INFO][6834] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:16.187042 containerd[1622]: time="2026-01-28T01:34:16.183120872Z" level=info msg="TearDown network for sandbox \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\" successfully" Jan 28 01:34:16.187042 containerd[1622]: time="2026-01-28T01:34:16.183157260Z" level=info msg="StopPodSandbox for \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\" returns successfully" Jan 28 01:34:16.208432 containerd[1622]: time="2026-01-28T01:34:16.189109913Z" level=info msg="RemovePodSandbox for \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\"" Jan 28 01:34:16.208432 containerd[1622]: time="2026-01-28T01:34:16.189152963Z" level=info msg="Forcibly stopping sandbox \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\"" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:16.898 [WARNING][6868] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" WorkloadEndpoint="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:16.899 [INFO][6868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:16.899 [INFO][6868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" iface="eth0" netns="" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:16.899 [INFO][6868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:16.899 [INFO][6868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.277 [INFO][6876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.282 [INFO][6876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.282 [INFO][6876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.388 [WARNING][6876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.388 [INFO][6876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" HandleID="k8s-pod-network.d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Workload="localhost-k8s-whisker--59c86599d9--sc97f-eth0" Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.463 [INFO][6876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:34:17.624274 containerd[1622]: 2026-01-28 01:34:17.537 [INFO][6868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e" Jan 28 01:34:17.624274 containerd[1622]: time="2026-01-28T01:34:17.623002859Z" level=info msg="TearDown network for sandbox \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\" successfully" Jan 28 01:34:17.703952 containerd[1622]: time="2026-01-28T01:34:17.703864129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:34:17.703952 containerd[1622]: time="2026-01-28T01:34:17.703944979Z" level=info msg="RemovePodSandbox \"d69e5b7d42d22bbb415e52612106b037198e7beec47da43792edd7fcb5d6811e\" returns successfully" Jan 28 01:34:18.461735 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:56172.service - OpenSSH per-connection server daemon (10.0.0.1:56172). Jan 28 01:34:18.594573 kubelet[2972]: E0128 01:34:18.589359 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:18.603940 kubelet[2972]: E0128 01:34:18.601061 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:34:18.756069 sshd[6885]: Accepted publickey for core from 10.0.0.1 port 56172 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:18.817589 sshd[6885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:18.889243 systemd-logind[1612]: New session 16 of user core. Jan 28 01:34:18.965062 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 28 01:34:20.279358 sshd[6885]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:20.327707 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:56172.service: Deactivated successfully. Jan 28 01:34:20.344150 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:34:20.358784 systemd-logind[1612]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:34:20.433069 systemd-logind[1612]: Removed session 16. Jan 28 01:34:21.556384 kubelet[2972]: E0128 01:34:21.556291 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:23.788571 containerd[1622]: time="2026-01-28T01:34:23.785145049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:34:24.002546 containerd[1622]: time="2026-01-28T01:34:24.002493807Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:24.019734 containerd[1622]: time="2026-01-28T01:34:24.019426369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:34:24.019734 containerd[1622]: time="2026-01-28T01:34:24.019572512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:34:24.025335 kubelet[2972]: E0128 01:34:24.020896 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:34:24.025335 kubelet[2972]: E0128 01:34:24.020961 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:34:24.025335 kubelet[2972]: E0128 01:34:24.021124 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:24.025335 kubelet[2972]: E0128 01:34:24.023554 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:34:25.347007 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:51140.service - OpenSSH per-connection 
server daemon (10.0.0.1:51140). Jan 28 01:34:25.431413 sshd[6922]: Accepted publickey for core from 10.0.0.1 port 51140 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:25.434671 sshd[6922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:25.469816 systemd-logind[1612]: New session 17 of user core. Jan 28 01:34:25.502396 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:34:25.568496 containerd[1622]: time="2026-01-28T01:34:25.567564731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:34:25.693794 containerd[1622]: time="2026-01-28T01:34:25.688460931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:25.711720 containerd[1622]: time="2026-01-28T01:34:25.709463458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:34:25.711720 containerd[1622]: time="2026-01-28T01:34:25.709590094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:34:25.711956 kubelet[2972]: E0128 01:34:25.709918 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:34:25.711956 kubelet[2972]: E0128 01:34:25.709978 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:34:25.711956 kubelet[2972]: E0128 01:34:25.710112 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:25.714956 containerd[1622]: time="2026-01-28T01:34:25.714037150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:34:25.822494 containerd[1622]: time="2026-01-28T01:34:25.819848372Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:25.833447 containerd[1622]: time="2026-01-28T01:34:25.833283862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:34:25.833674 containerd[1622]: time="2026-01-28T01:34:25.833461733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:34:25.833894 kubelet[2972]: E0128 01:34:25.833832 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:34:25.833961 kubelet[2972]: E0128 01:34:25.833893 2972 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:34:25.834155 kubelet[2972]: E0128 01:34:25.834018 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:25.874734 kubelet[2972]: E0128 01:34:25.872429 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:34:25.960442 sshd[6922]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:25.982072 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:51140.service: Deactivated successfully. Jan 28 01:34:26.005015 systemd-logind[1612]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:34:26.009587 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:34:26.014041 systemd-logind[1612]: Removed session 17. Jan 28 01:34:26.569957 kubelet[2972]: E0128 01:34:26.569902 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:34:27.570430 kubelet[2972]: E0128 01:34:27.560826 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:34:29.591484 containerd[1622]: time="2026-01-28T01:34:29.591380133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:34:29.786769 containerd[1622]: time="2026-01-28T01:34:29.785743629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:29.802526 containerd[1622]: time="2026-01-28T01:34:29.798856509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:34:29.802526 containerd[1622]: time="2026-01-28T01:34:29.799027589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:34:29.802854 kubelet[2972]: E0128 01:34:29.799554 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:34:29.802854 kubelet[2972]: E0128 01:34:29.799783 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:34:29.802854 
kubelet[2972]: E0128 01:34:29.799949 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkb45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:29.805520 kubelet[2972]: E0128 01:34:29.805482 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:34:31.017475 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:51150.service - OpenSSH per-connection server daemon (10.0.0.1:51150). Jan 28 01:34:31.213545 sshd[6948]: Accepted publickey for core from 10.0.0.1 port 51150 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:31.226147 sshd[6948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:31.294817 systemd-logind[1612]: New session 18 of user core. Jan 28 01:34:31.324359 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:34:31.614806 kubelet[2972]: E0128 01:34:31.614203 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:32.092919 sshd[6948]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:32.121912 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:51150.service: Deactivated successfully. Jan 28 01:34:32.163017 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:34:32.177536 systemd-logind[1612]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:34:32.192874 systemd-logind[1612]: Removed session 18. Jan 28 01:34:32.557135 kubelet[2972]: E0128 01:34:32.556703 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:34:33.186522 update_engine[1613]: I20260128 01:34:33.186120 1613 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:34:33.186522 update_engine[1613]: I20260128 01:34:33.186206 1613 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:34:33.200976 update_engine[1613]: I20260128 01:34:33.200555 1613 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:34:33.202034 update_engine[1613]: I20260128 01:34:33.202006 1613 omaha_request_params.cc:62] Current group set to lts Jan 28 01:34:33.209143 update_engine[1613]: I20260128 01:34:33.209108 1613 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:34:33.209309 update_engine[1613]: I20260128 01:34:33.209225 1613 update_attempter.cc:643] Scheduling an action processor start. 
Jan 28 01:34:33.211512 update_engine[1613]: I20260128 01:34:33.209384 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:34:33.211512 update_engine[1613]: I20260128 01:34:33.209508 1613 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:34:33.211512 update_engine[1613]: I20260128 01:34:33.209693 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:34:33.211512 update_engine[1613]: I20260128 01:34:33.209714 1613 omaha_request_action.cc:272] Request: Jan 28 01:34:33.211512 update_engine[1613]: [multi-line Omaha request XML body not captured in this log] Jan 28 01:34:33.211512 update_engine[1613]: I20260128 01:34:33.209726 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:34:33.240091 update_engine[1613]: I20260128 01:34:33.239809 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:34:33.259003 update_engine[1613]: I20260128 01:34:33.258924 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:34:33.284793 update_engine[1613]: E20260128 01:34:33.284734 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:34:33.285051 update_engine[1613]: I20260128 01:34:33.285006 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:34:33.377478 locksmithd[1669]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:34:34.591046 containerd[1622]: time="2026-01-28T01:34:34.589581136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:34:34.806981 containerd[1622]: time="2026-01-28T01:34:34.805399168Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:34.839796 containerd[1622]: time="2026-01-28T01:34:34.839512717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:34:34.839796 containerd[1622]: time="2026-01-28T01:34:34.839728551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:34:34.844030 kubelet[2972]: E0128 01:34:34.840211 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:34:34.844030 kubelet[2972]: E0128 01:34:34.840350 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:34:34.844951 kubelet[2972]: E0128 01:34:34.844898 2972 kuberuntime_manager.go:1341]
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:34.885152 containerd[1622]: time="2026-01-28T01:34:34.878739149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:34:35.069905 containerd[1622]: time="2026-01-28T01:34:35.068892868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:34:35.074504 containerd[1622]: time="2026-01-28T01:34:35.072467704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:34:35.074504 containerd[1622]: time="2026-01-28T01:34:35.072717981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:34:35.074779 kubelet[2972]: E0128 01:34:35.072880 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:34:35.074779 kubelet[2972]: E0128 01:34:35.072942 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:34:35.074779 kubelet[2972]: E0128 01:34:35.073074 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:34:35.075462 kubelet[2972]: E0128 01:34:35.075067 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:34:35.559974 kubelet[2972]: E0128 01:34:35.559076 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:34:37.200231 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:39834.service - OpenSSH per-connection server daemon (10.0.0.1:39834). Jan 28 01:34:37.805135 sshd[6966]: Accepted publickey for core from 10.0.0.1 port 39834 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:37.816708 sshd[6966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:37.868002 systemd-logind[1612]: New session 19 of user core. Jan 28 01:34:37.893097 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:34:38.404792 sshd[6966]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:38.413067 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:39834.service: Deactivated successfully. Jan 28 01:34:38.429845 systemd-logind[1612]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:34:38.430355 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:34:38.433560 systemd-logind[1612]: Removed session 19. Jan 28 01:34:38.573965 kubelet[2972]: E0128 01:34:38.573422 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:34:39.557924 kubelet[2972]: E0128 01:34:39.557237 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:34:40.614783 kubelet[2972]: E0128 01:34:40.614518 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:34:42.565732 kubelet[2972]: E0128 01:34:42.562915 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:34:43.476148 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:57192.service - OpenSSH per-connection server daemon (10.0.0.1:57192). Jan 28 01:34:43.682240 sshd[6982]: Accepted publickey for core from 10.0.0.1 port 57192 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:43.689882 sshd[6982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:43.703079 systemd-logind[1612]: New session 20 of user core. Jan 28 01:34:43.730457 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:34:44.036938 update_engine[1613]: I20260128 01:34:44.035723 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:34:44.036938 update_engine[1613]: I20260128 01:34:44.036154 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:34:44.042449 update_engine[1613]: I20260128 01:34:44.042315 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:34:44.077819 update_engine[1613]: E20260128 01:34:44.077463 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:34:44.077819 update_engine[1613]: I20260128 01:34:44.077585 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:34:44.415684 sshd[6982]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:44.465829 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:57192.service: Deactivated successfully. Jan 28 01:34:44.496878 systemd-logind[1612]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:34:44.498316 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:34:44.514916 systemd-logind[1612]: Removed session 20. 
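Every pull failure above follows the same pattern: containerd asks ghcr.io for a tag such as ghcr.io/flatcar/calico/csi:v3.30.4, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces the NotFound as ErrImagePull — the tags simply do not resolve under those names. One way to confirm this off-node is to probe the registry's OCI distribution API directly. The sketch below is illustrative only: the helper name tag_exists and the anonymous pull-token endpoint are assumptions about ghcr.io's auth flow, not anything taken from this log.

```python
import json
import urllib.error
import urllib.request

def tag_exists(registry: str, repo: str, tag: str) -> bool:
    """HEAD a manifest via the OCI distribution API; a 404 means the tag is missing."""
    # Anonymous pull token -- an assumed ghcr.io auth flow, verify before relying on it.
    token_url = f"https://{registry}/token?service={registry}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # mirrors containerd's "not found" above
            return False
        raise

# Expected to print False for the tag the kubelet kept failing on:
print(tag_exists("ghcr.io", "flatcar/calico/csi", "v3.30.4"))
```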
Jan 28 01:34:46.618958 kubelet[2972]: E0128 01:34:46.616281 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:34:47.571848 kubelet[2972]: E0128 01:34:47.571053 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:34:49.562832 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:57198.service - OpenSSH per-connection server daemon (10.0.0.1:57198). Jan 28 01:34:49.797175 sshd[7020]: Accepted publickey for core from 10.0.0.1 port 57198 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:49.795991 sshd[7020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:49.821034 systemd-logind[1612]: New session 21 of user core. Jan 28 01:34:49.861813 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:34:50.773771 sshd[7020]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:50.797541 systemd-logind[1612]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:34:50.798188 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:57198.service: Deactivated successfully. Jan 28 01:34:50.839860 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:34:50.875325 systemd-logind[1612]: Removed session 21. 
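Once a pull has failed, kubelet does not retry immediately; it reports ImagePullBackOff, which is why the same "Back-off pulling image" lines recur every few minutes rather than continuously. A minimal sketch of that pacing, assuming kubelet's commonly cited defaults of a 10 s initial delay doubling up to a 5 min cap (those values are assumptions, not present in this log):

```python
def backoff_schedule(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
    """Yield (attempt, delay_before_attempt, elapsed) for a doubling back-off."""
    delay, elapsed = base, 0.0
    for attempt in range(1, attempts + 1):
        elapsed += delay
        yield attempt, delay, elapsed
        delay = min(delay * 2, cap)

# With a 10 s base and 300 s cap, retries land at t+10, 30, 70, 150, 310,
# 610, 910, 1210 s -- i.e. roughly every five minutes once the cap is hit,
# consistent with the spacing of the repeated back-off messages above.
for attempt, delay, elapsed in backoff_schedule():
    print(f"attempt {attempt}: waited {delay:5.0f}s (t+{elapsed:.0f}s)")
```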
Jan 28 01:34:51.570713 kubelet[2972]: E0128 01:34:51.567475 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:34:51.570713 kubelet[2972]: E0128 01:34:51.568150 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:34:53.561989 kubelet[2972]: E0128 01:34:53.561694 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:34:53.566081 kubelet[2972]: E0128 01:34:53.565148 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:34:54.038720 update_engine[1613]: I20260128 01:34:54.037889 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:34:54.038720 update_engine[1613]: I20260128 01:34:54.038309 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:34:54.040862 update_engine[1613]: I20260128 01:34:54.039992 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:34:54.066696 update_engine[1613]: E20260128 01:34:54.061705 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:34:54.066696 update_engine[1613]: I20260128 01:34:54.061821 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 01:34:55.803065 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:35926.service - OpenSSH per-connection server daemon (10.0.0.1:35926). Jan 28 01:34:55.921133 sshd[7038]: Accepted publickey for core from 10.0.0.1 port 35926 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:34:55.930837 sshd[7038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:34:55.996077 systemd-logind[1612]: New session 22 of user core. Jan 28 01:34:56.080840 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:34:56.727947 sshd[7038]: pam_unix(sshd:session): session closed for user core Jan 28 01:34:56.735432 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:35926.service: Deactivated successfully. Jan 28 01:34:56.747076 systemd-logind[1612]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:34:56.749254 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:34:56.755996 systemd-logind[1612]: Removed session 22. Jan 28 01:34:58.608325 kubelet[2972]: E0128 01:34:58.607424 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:35:00.560279 kubelet[2972]: E0128 01:35:00.558279 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:35:01.774754 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:35938.service - OpenSSH per-connection server daemon (10.0.0.1:35938). Jan 28 01:35:02.007913 sshd[7057]: Accepted publickey for core from 10.0.0.1 port 35938 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:02.011465 sshd[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:02.057305 systemd-logind[1612]: New session 23 of user core. 
Jan 28 01:35:02.100820 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:35:02.561841 kubelet[2972]: E0128 01:35:02.561236 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:35:02.663982 sshd[7057]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:02.846322 systemd[1]: Started sshd@23-10.0.0.77:22-10.0.0.1:36248.service - OpenSSH per-connection server daemon (10.0.0.1:36248). Jan 28 01:35:02.875457 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:35938.service: Deactivated successfully. Jan 28 01:35:02.935474 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:35:02.984415 systemd-logind[1612]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:35:02.987322 systemd-logind[1612]: Removed session 23. Jan 28 01:35:03.110702 sshd[7072]: Accepted publickey for core from 10.0.0.1 port 36248 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:03.130964 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:03.170947 systemd-logind[1612]: New session 24 of user core. Jan 28 01:35:03.207152 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:35:04.037288 update_engine[1613]: I20260128 01:35:04.035971 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:35:04.037288 update_engine[1613]: I20260128 01:35:04.036384 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:35:04.066400 update_engine[1613]: I20260128 01:35:04.041949 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:35:04.102799 update_engine[1613]: E20260128 01:35:04.082746 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.088114 1613 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.088160 1613 omaha_request_action.cc:617] Omaha request response: Jan 28 01:35:04.102799 update_engine[1613]: E20260128 01:35:04.093971 1613 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094136 1613 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094156 1613 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094166 1613 update_attempter.cc:306] Processing Done. Jan 28 01:35:04.102799 update_engine[1613]: E20260128 01:35:04.094192 1613 update_attempter.cc:619] Update failed. 
Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094205 1613 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094214 1613 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094224 1613 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094334 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094376 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:35:04.102799 update_engine[1613]: I20260128 01:35:04.094390 1613 omaha_request_action.cc:272] Request: Jan 28 01:35:04.102799 update_engine[1613]: [multi-line Omaha request XML body not captured in this log] Jan 28 01:35:04.103474 locksmithd[1669]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 01:35:04.115052 update_engine[1613]: [request XML continues, not captured] Jan 28 01:35:04.115052 update_engine[1613]: I20260128 01:35:04.094403 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:35:04.115052 update_engine[1613]: I20260128 01:35:04.096920 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:35:04.115052 update_engine[1613]: I20260128 01:35:04.102685 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:35:04.124402 update_engine[1613]: E20260128 01:35:04.123940 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124048 1613 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124069 1613 omaha_request_action.cc:617] Omaha request response: Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124082 1613 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124094 1613 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124102 1613 update_attempter.cc:306] Processing Done. Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124114 1613 update_attempter.cc:310] Error event sent. Jan 28 01:35:04.124402 update_engine[1613]: I20260128 01:35:04.124133 1613 update_check_scheduler.cc:74] Next update check in 44m57s Jan 28 01:35:04.133123 locksmithd[1669]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 01:35:04.395131 sshd[7072]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:04.491115 systemd[1]: Started sshd@24-10.0.0.77:22-10.0.0.1:36262.service - OpenSSH per-connection server daemon (10.0.0.1:36262). Jan 28 01:35:04.502408 systemd[1]: sshd@23-10.0.0.77:22-10.0.0.1:36248.service: Deactivated successfully. Jan 28 01:35:04.531924 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:35:04.535478 systemd-logind[1612]: Session 24 logged out. Waiting for processes to exit.
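The update_engine entries above form one complete failed check cycle: the Omaha request is posted to the literal host name "disabled" (so DNS resolution can never succeed), libcurl retries, the transfer failure is converted to error code 2000 and recorded as kActionCodeOmahaErrorInHTTPResponse (37), and the next check is scheduled 44m57s out. A minimal sketch of that fetch-retry-reschedule shape, with the hypothetical helper omaha_check; the three retries and the reschedule message come from the log, everything else is illustrative:

```python
import socket

def omaha_check(server: str, retries: int = 3) -> bool:
    """One update check: resolve-and-POST with a few retries, then give up."""
    for attempt in range(1, retries + 1):
        try:
            socket.getaddrinfo(server, 443)  # fails: "Could not resolve host: disabled"
            return True                      # a real client would POST the Omaha XML here
        except socket.gaierror:
            print(f"No HTTP response, retry {attempt}")
            # the log shows roughly ten-second gaps between these retries
    return False

if not omaha_check("disabled"):
    print("Omaha request network transfer failed; next update check in 44m57s")
```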
Jan 28 01:35:04.559190 systemd-logind[1612]: Removed session 24. Jan 28 01:35:04.580403 kubelet[2972]: E0128 01:35:04.579823 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:35:04.634791 sshd[7086]: Accepted publickey for core from 10.0.0.1 port 36262 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:04.644812 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:04.675153 systemd-logind[1612]: New session 25 of user core. Jan 28 01:35:04.719296 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:35:05.727005 sshd[7086]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:05.734697 systemd-logind[1612]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:35:05.736923 systemd[1]: sshd@24-10.0.0.77:22-10.0.0.1:36262.service: Deactivated successfully. Jan 28 01:35:05.759326 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:35:05.761908 systemd-logind[1612]: Removed session 25. 
Jan 28 01:35:06.568787 kubelet[2972]: E0128 01:35:06.566170 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:06.787219 kubelet[2972]: E0128 01:35:06.783144 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:35:06.787219 kubelet[2972]: E0128 01:35:06.783287 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:35:10.571300 kubelet[2972]: E0128 01:35:10.561581 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:10.739230 systemd[1]: Started sshd@25-10.0.0.77:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Jan 28 01:35:10.885235 sshd[7106]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:10.895843 sshd[7106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:10.919472 systemd-logind[1612]: New session 26 of user core. Jan 28 01:35:10.936977 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:35:11.383856 sshd[7106]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:11.406399 systemd[1]: sshd@25-10.0.0.77:22-10.0.0.1:36264.service: Deactivated successfully. Jan 28 01:35:11.426036 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:35:11.433227 systemd-logind[1612]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:35:11.445359 systemd-logind[1612]: Removed session 26. 
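The recurring dns.go:153 warning is kubelet noticing that the node's resolv.conf lists more nameservers than classic resolvers can use: glibc honors only the first three entries (MAXNS), so once 1.1.1.1, 1.0.0.1, and 8.8.8.8 are applied, any further entries are dropped from pod DNS config. A small sketch of that check, with the hypothetical helper check_resolv_conf:

```python
MAX_NAMESERVERS = 3  # glibc MAXNS: resolvers read at most three entries

def check_resolv_conf(path: str = "/etc/resolv.conf") -> list[str]:
    """List configured nameservers and flag the ones a resolver would drop."""
    with open(path) as f:
        servers = [parts[1] for line in f
                   if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
    if len(servers) > MAX_NAMESERVERS:
        kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
        print(f"Nameserver limits exceeded: applying {kept}, omitting {dropped}")
    return servers

check_resolv_conf()
```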
Jan 28 01:35:13.574789 kubelet[2972]: E0128 01:35:13.569853 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:35:13.574789 kubelet[2972]: E0128 01:35:13.571082 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:35:13.582289 kubelet[2972]: E0128 01:35:13.577147 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:35:16.432524 systemd[1]: Started sshd@26-10.0.0.77:22-10.0.0.1:47444.service - OpenSSH per-connection server daemon (10.0.0.1:47444). Jan 28 01:35:16.652868 sshd[7144]: Accepted publickey for core from 10.0.0.1 port 47444 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:16.704550 sshd[7144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:16.868299 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 01:35:16.870801 systemd-logind[1612]: New session 27 of user core. 
Jan 28 01:35:17.572006 kubelet[2972]: E0128 01:35:17.570240 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:35:17.599247 sshd[7144]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:17.608302 systemd[1]: sshd@26-10.0.0.77:22-10.0.0.1:47444.service: Deactivated successfully. Jan 28 01:35:17.614751 systemd-logind[1612]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:35:17.616001 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:35:17.618373 systemd-logind[1612]: Removed session 27. Jan 28 01:35:18.602757 kubelet[2972]: E0128 01:35:18.599373 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:35:20.603954 kubelet[2972]: E0128 01:35:20.596511 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:35:22.609083 kubelet[2972]: E0128 01:35:22.608403 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:22.618440 kubelet[2972]: E0128 01:35:22.618351 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:22.657372 systemd[1]: Started sshd@27-10.0.0.77:22-10.0.0.1:58178.service - OpenSSH per-connection server daemon (10.0.0.1:58178). 
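Interleaved with the Calico failures, the log records a steady cadence of short SSH sessions (sessions 17 through 28 so far), each opened by pam_unix and torn down by systemd-logind moments later. When auditing a capture like this, pairing the "New session" and "Removed session" lines gives per-session durations; the sketch below does that for journal-formatted lines, with the hypothetical helper session_durations and regexes written for the exact format shown here:

```python
import re
from datetime import datetime

OPEN = re.compile(r"^(\w+ \d+ [\d:.]+) .*New session (\d+) of user")
CLOSE = re.compile(r"^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.")

def session_durations(lines, year=2026):
    """Yield (session_id, seconds) for every open/close pair in the stream."""
    opened = {}
    fmt = "%Y %b %d %H:%M:%S.%f"  # journal timestamps carry no year
    for line in lines:
        if m := OPEN.search(line):
            opened[m.group(2)] = datetime.strptime(f"{year} {m.group(1)}", fmt)
        elif (m := CLOSE.search(line)) and m.group(2) in opened:
            start = opened.pop(m.group(2))
            end = datetime.strptime(f"{year} {m.group(1)}", fmt)
            yield m.group(2), (end - start).total_seconds()

sample = [
    "Jan 28 01:35:23.071870 systemd-logind[1612]: New session 28 of user core.",
    "Jan 28 01:35:24.415037 systemd-logind[1612]: Removed session 28.",
]
for sid, secs in session_durations(sample):
    print(f"session {sid}: open for {secs:.1f}s")
```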
Jan 28 01:35:23.001869 sshd[7160]: Accepted publickey for core from 10.0.0.1 port 58178 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:23.009865 sshd[7160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:23.071870 systemd-logind[1612]: New session 28 of user core. Jan 28 01:35:23.086556 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 01:35:24.313479 sshd[7160]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:24.369474 systemd-logind[1612]: Session 28 logged out. Waiting for processes to exit. Jan 28 01:35:24.378139 systemd[1]: sshd@27-10.0.0.77:22-10.0.0.1:58178.service: Deactivated successfully. Jan 28 01:35:24.399115 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:35:24.415037 systemd-logind[1612]: Removed session 28. Jan 28 01:35:24.570017 kubelet[2972]: E0128 01:35:24.569764 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:35:24.571428 kubelet[2972]: E0128 01:35:24.570288 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:35:25.617908 kubelet[2972]: E0128 01:35:25.617738 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:35:26.573164 kubelet[2972]: E0128 01:35:26.572031 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:29.438443 systemd[1]: Started sshd@28-10.0.0.77:22-10.0.0.1:58184.service - 
OpenSSH per-connection server daemon (10.0.0.1:58184). Jan 28 01:35:29.572982 kubelet[2972]: E0128 01:35:29.572930 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:30.030822 sshd[7176]: Accepted publickey for core from 10.0.0.1 port 58184 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:30.044388 sshd[7176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:30.122308 systemd-logind[1612]: New session 29 of user core. Jan 28 01:35:30.131218 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 01:35:31.229021 sshd[7176]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:31.258724 systemd[1]: sshd@28-10.0.0.77:22-10.0.0.1:58184.service: Deactivated successfully. Jan 28 01:35:31.299043 systemd-logind[1612]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:35:31.309393 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 01:35:31.312503 systemd-logind[1612]: Removed session 29. Jan 28 01:35:31.585200 kubelet[2972]: E0128 01:35:31.584686 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:35:31.595249 kubelet[2972]: E0128 01:35:31.588989 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:35:32.587152 kubelet[2972]: E0128 01:35:32.581738 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:35:36.273302 kubelet[2972]: E0128 01:35:36.271729 2972 
kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.703s" Jan 28 01:35:36.351875 systemd[1]: Started sshd@29-10.0.0.77:22-10.0.0.1:48530.service - OpenSSH per-connection server daemon (10.0.0.1:48530). Jan 28 01:35:36.603356 sshd[7194]: Accepted publickey for core from 10.0.0.1 port 48530 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:36.613160 sshd[7194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:36.665926 systemd-logind[1612]: New session 30 of user core. Jan 28 01:35:36.684348 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 01:35:37.512764 sshd[7194]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:37.541826 systemd-logind[1612]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:35:37.567093 systemd[1]: sshd@29-10.0.0.77:22-10.0.0.1:48530.service: Deactivated successfully. Jan 28 01:35:37.589191 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:35:37.598228 kubelet[2972]: E0128 01:35:37.573566 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:35:37.610296 containerd[1622]: time="2026-01-28T01:35:37.606427219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:35:37.611173 systemd-logind[1612]: Removed session 30. Jan 28 01:35:37.628487 kubelet[2972]: E0128 01:35:37.628087 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:35:37.812855 containerd[1622]: time="2026-01-28T01:35:37.810052014Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:35:37.821896 containerd[1622]: time="2026-01-28T01:35:37.820695049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:35:37.821896 containerd[1622]: time="2026-01-28T01:35:37.820834731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:35:37.826411 kubelet[2972]: E0128 01:35:37.824421 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:35:37.826411 kubelet[2972]: E0128 01:35:37.824519 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:35:37.826411 kubelet[2972]: E0128 01:35:37.824727 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx52m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:35:37.834747 kubelet[2972]: E0128 01:35:37.830124 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:35:40.576459 kubelet[2972]: E0128 01:35:40.576131 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:35:42.545196 systemd[1]: Started sshd@30-10.0.0.77:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). Jan 28 01:35:42.897972 sshd[7217]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:42.913297 sshd[7217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:42.974033 systemd-logind[1612]: New session 31 of user core. Jan 28 01:35:42.990312 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 28 01:35:43.568126 kubelet[2972]: E0128 01:35:43.563740 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:35:43.564851 sshd[7217]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:43.588130 systemd[1]: sshd@30-10.0.0.77:22-10.0.0.1:49962.service: Deactivated successfully. Jan 28 01:35:43.617811 systemd-logind[1612]: Session 31 logged out. Waiting for processes to exit. Jan 28 01:35:43.624684 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 01:35:43.631688 systemd-logind[1612]: Removed session 31. 
Jan 28 01:35:44.608051 kubelet[2972]: E0128 01:35:44.607891 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:35:45.561689 containerd[1622]: time="2026-01-28T01:35:45.560002919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:35:45.760990 containerd[1622]: time="2026-01-28T01:35:45.758845479Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:35:45.778813 containerd[1622]: time="2026-01-28T01:35:45.778687137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:35:45.794999 kubelet[2972]: E0128 01:35:45.789925 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:35:45.794999 kubelet[2972]: E0128 01:35:45.789991 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:35:45.853420 kubelet[2972]: E0128 01:35:45.804448 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4s7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:35:45.853420 kubelet[2972]: E0128 01:35:45.822445 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:35:45.853893 containerd[1622]: time="2026-01-28T01:35:45.803189946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:35:48.606939 systemd[1]: Started sshd@31-10.0.0.77:22-10.0.0.1:49966.service - OpenSSH per-connection server daemon (10.0.0.1:49966). Jan 28 01:35:48.797467 sshd[7255]: Accepted publickey for core from 10.0.0.1 port 49966 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:48.802995 sshd[7255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:48.845825 systemd-logind[1612]: New session 32 of user core. Jan 28 01:35:48.870333 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 28 01:35:49.474585 sshd[7255]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:49.489965 systemd[1]: sshd@31-10.0.0.77:22-10.0.0.1:49966.service: Deactivated successfully. Jan 28 01:35:49.518877 systemd-logind[1612]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:35:49.520078 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:35:49.541932 systemd-logind[1612]: Removed session 32. 
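[Editor's note] The giant "&Container{Name:...,Image:...,}" blobs above are not corruption: when a container start fails, kubelet logs the whole v1.Container spec through the API types' code-generated String() methods, which render every field inline on one line. A toy that mimics the mechanism with a hand-written String() (the real methods are generated for every API type):

```go
// specdump: why one "Unhandled Error" journal entry balloons to kilobytes.
package main

import "fmt"

// Container is a toy stand-in for the k8s core/v1 type.
type Container struct {
	Name  string
	Image string
	Args  []string
}

func (c *Container) String() string {
	// Mirrors the generated stringer style seen above: no spaces, trailing comma.
	return fmt.Sprintf("&Container{Name:%s,Image:%s,Args:%v,}", c.Name, c.Image, c.Args)
}

func main() {
	c := &Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		Args:  []string{"--secure-port=5443"},
	}
	// fmt picks up String(), so a single log statement emits the whole
	// spec on one line, which is why these journal entries are so long.
	fmt.Printf("start failed: %v\n", c)
}
```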
Jan 28 01:35:51.563786 kubelet[2972]: E0128 01:35:51.563002 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:35:52.579465 containerd[1622]: time="2026-01-28T01:35:52.575780973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:35:52.686506 containerd[1622]: time="2026-01-28T01:35:52.685008874Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:35:52.710431 containerd[1622]: time="2026-01-28T01:35:52.706838781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:35:52.710431 containerd[1622]: time="2026-01-28T01:35:52.706984121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:35:52.715447 kubelet[2972]: E0128 01:35:52.713423 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:35:52.715447 kubelet[2972]: E0128 01:35:52.713766 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:35:52.715447 kubelet[2972]: E0128 01:35:52.714278 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:35:52.717960 kubelet[2972]: E0128 01:35:52.715446 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:35:54.499002 systemd[1]: Started sshd@32-10.0.0.77:22-10.0.0.1:40818.service - OpenSSH per-connection 
server daemon (10.0.0.1:40818). Jan 28 01:35:54.637377 kubelet[2972]: E0128 01:35:54.629186 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:35:54.726052 sshd[7273]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:35:54.735217 sshd[7273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:35:54.782928 systemd-logind[1612]: New session 33 of user core. Jan 28 01:35:54.799712 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:35:55.254747 sshd[7273]: pam_unix(sshd:session): session closed for user core Jan 28 01:35:55.295578 systemd[1]: sshd@32-10.0.0.77:22-10.0.0.1:40818.service: Deactivated successfully. Jan 28 01:35:55.315023 systemd-logind[1612]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:35:55.316048 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:35:55.325581 systemd-logind[1612]: Removed session 33. 
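[Editor's note] Interleaved with the pull failures, SSH sessions 26 through 33 follow an identical lifecycle: Accepted publickey, pam_unix session opened, scope started, session closed, scope deactivated. When auditing a journal like this one, pairing the pam_unix open/close events yields session durations. A sketch that reads journalctl text on stdin, assuming the "Jan 28 01:35:16.652868" timestamp prefix and a stable sshd[PID] tag across each session, as seen here:

```go
// sessions: pair pam_unix "session opened"/"session closed" events and
// report how long each SSH session lasted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

func main() {
	const stamp = "Jan _2 15:04:05.000000" // prefix format used in this journal
	opened := map[string]time.Time{}       // keyed by the sshd[PID] tag
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, "sshd[")
		j := strings.Index(line, "]: pam_unix(sshd:session)")
		if i < 0 || j < i || len(line) < len(stamp) {
			continue
		}
		t, err := time.Parse(stamp, line[:len(stamp)])
		if err != nil {
			continue
		}
		tag := line[i : j+1] // e.g. "sshd[7160]"
		switch {
		case strings.Contains(line, "session opened"):
			opened[tag] = t
		case strings.Contains(line, "session closed"):
			if start, ok := opened[tag]; ok {
				fmt.Printf("%s: session lasted %v\n", tag, t.Sub(start))
				delete(opened, tag)
			}
		}
	}
}
```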
Jan 28 01:35:56.581836 containerd[1622]: time="2026-01-28T01:35:56.578486586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:35:56.729534 containerd[1622]: time="2026-01-28T01:35:56.726363301Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:35:56.763739 containerd[1622]: time="2026-01-28T01:35:56.760999204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:35:56.763977 containerd[1622]: time="2026-01-28T01:35:56.763929943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:35:56.772145 kubelet[2972]: E0128 01:35:56.769016 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:35:56.772145 kubelet[2972]: E0128 01:35:56.769432 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:35:56.772145 kubelet[2972]: E0128 01:35:56.769910 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:35:56.780848 containerd[1622]: time="2026-01-28T01:35:56.775984500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:35:56.893141 containerd[1622]: time="2026-01-28T01:35:56.892073739Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:35:56.906076 containerd[1622]: time="2026-01-28T01:35:56.905518020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:35:56.906076 containerd[1622]: time="2026-01-28T01:35:56.905788177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:35:56.907800 kubelet[2972]: E0128 01:35:56.906570 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:35:56.907800 kubelet[2972]: E0128 01:35:56.907037 2972 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:35:56.907800 kubelet[2972]: E0128 01:35:56.907397 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:35:56.912174 kubelet[2972]: E0128 01:35:56.910360 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:35:57.564172 containerd[1622]: time="2026-01-28T01:35:57.564122630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:35:57.798303 containerd[1622]: time="2026-01-28T01:35:57.798038947Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:35:57.817852 containerd[1622]: time="2026-01-28T01:35:57.817501916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:35:57.817852 containerd[1622]: time="2026-01-28T01:35:57.817717470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:35:57.821021 kubelet[2972]: E0128 01:35:57.820864 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:35:57.821021 kubelet[2972]: E0128 01:35:57.820993 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:35:57.824904 kubelet[2972]: E0128 01:35:57.821264 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkb45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dp6nh_calico-system(a0975c98-58e0-4afd-9150-95ec5af111e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:35:57.828022 kubelet[2972]: E0128 01:35:57.826191 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:36:00.318135 systemd[1]: Started sshd@33-10.0.0.77:22-10.0.0.1:40820.service - OpenSSH per-connection server daemon (10.0.0.1:40820). Jan 28 01:36:00.504580 sshd[7304]: Accepted publickey for core from 10.0.0.1 port 40820 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:00.507236 sshd[7304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:00.568844 systemd-logind[1612]: New session 34 of user core. Jan 28 01:36:00.574009 kubelet[2972]: E0128 01:36:00.571333 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:36:00.575040 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 28 01:36:00.924240 sshd[7304]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:00.931809 systemd[1]: sshd@33-10.0.0.77:22-10.0.0.1:40820.service: Deactivated successfully. Jan 28 01:36:00.949173 systemd-logind[1612]: Session 34 logged out. Waiting for processes to exit. 
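[Editor's note] Each containerd pull above follows the same four-entry pattern: PullImage, "trying next host - response was http.StatusNotFound", the PullImage failed error, and "stop pulling ... bytes read=NN" (the bytes counting only the registry's error body, since no layers were fetched). To get a per-image tally out of a long journal like this one, a small summarizer helps; the quoted-ref extraction assumes the msg="PullImage \"<ref>\" failed" shape seen in these entries:

```go
// pullsummary: count PullImage failures per image ref from journalctl
// output on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Optional backslashes tolerate both escaped and unescaped quoting.
	re := regexp.MustCompile(`PullImage \\?"([^"\\]+)\\?" failed`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ref, n := range counts {
		fmt.Printf("%4d  %s\n", n, ref)
	}
}
```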
Jan 28 01:36:00.950836 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:36:00.974738 systemd-logind[1612]: Removed session 34. Jan 28 01:36:02.562589 kubelet[2972]: E0128 01:36:02.559214 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:03.563726 kubelet[2972]: E0128 01:36:03.563576 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:36:05.976210 systemd[1]: Started sshd@34-10.0.0.77:22-10.0.0.1:45724.service - OpenSSH per-connection server daemon (10.0.0.1:45724). Jan 28 01:36:06.123090 sshd[7329]: Accepted publickey for core from 10.0.0.1 port 45724 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:06.136786 sshd[7329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:06.157440 systemd-logind[1612]: New session 35 of user core. Jan 28 01:36:06.168453 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 28 01:36:06.568662 sshd[7329]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:06.575801 systemd[1]: sshd@34-10.0.0.77:22-10.0.0.1:45724.service: Deactivated successfully. Jan 28 01:36:06.587536 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:36:06.599900 systemd-logind[1612]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:36:06.603053 systemd-logind[1612]: Removed session 35. 
Jan 28 01:36:07.563875 kubelet[2972]: E0128 01:36:07.559088 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:36:08.576585 containerd[1622]: time="2026-01-28T01:36:08.575386189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:36:08.734719 containerd[1622]: time="2026-01-28T01:36:08.722034256Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:36:08.734719 containerd[1622]: time="2026-01-28T01:36:08.731788534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:36:08.734719 containerd[1622]: time="2026-01-28T01:36:08.731920011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:36:08.760980 kubelet[2972]: E0128 01:36:08.743824 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:36:08.760980 kubelet[2972]: E0128 01:36:08.743896 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:36:08.760980 kubelet[2972]: E0128 01:36:08.744026 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:08.768928 containerd[1622]: time="2026-01-28T01:36:08.768885367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:36:09.037982 containerd[1622]: time="2026-01-28T01:36:09.037143751Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:36:09.062580 containerd[1622]: time="2026-01-28T01:36:09.062447296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:36:09.063238 containerd[1622]: time="2026-01-28T01:36:09.062997330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:36:09.070447 kubelet[2972]: E0128 01:36:09.070375 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:36:09.070866 kubelet[2972]: E0128 01:36:09.070827 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:36:09.071399 kubelet[2972]: E0128 01:36:09.071265 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:36:09.076314 kubelet[2972]: E0128 01:36:09.076221 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:36:09.565270 kubelet[2972]: E0128 01:36:09.565205 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:36:10.607746 kubelet[2972]: E0128 01:36:10.605247 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:36:11.586422 systemd[1]: Started sshd@35-10.0.0.77:22-10.0.0.1:45732.service - OpenSSH per-connection server daemon (10.0.0.1:45732). Jan 28 01:36:11.826033 sshd[7347]: Accepted publickey for core from 10.0.0.1 port 45732 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:11.847309 sshd[7347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:11.873703 systemd-logind[1612]: New session 36 of user core. Jan 28 01:36:11.888119 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 28 01:36:12.263981 sshd[7347]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:12.276721 systemd[1]: sshd@35-10.0.0.77:22-10.0.0.1:45732.service: Deactivated successfully. Jan 28 01:36:12.303201 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:36:12.314386 systemd-logind[1612]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:36:12.325427 systemd-logind[1612]: Removed session 36. 
Jan 28 01:36:13.570102 kubelet[2972]: E0128 01:36:13.564143 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:13.570102 kubelet[2972]: E0128 01:36:13.568048 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:14.583838 kubelet[2972]: E0128 01:36:14.578859 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:36:17.350956 systemd[1]: Started sshd@36-10.0.0.77:22-10.0.0.1:55974.service - OpenSSH per-connection server daemon (10.0.0.1:55974). Jan 28 01:36:17.598097 kubelet[2972]: E0128 01:36:17.595275 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:36:17.739761 sshd[7387]: Accepted publickey for core from 10.0.0.1 port 55974 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:17.740148 sshd[7387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:17.787518 systemd-logind[1612]: New session 37 of user core. Jan 28 01:36:17.802993 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 28 01:36:18.749276 sshd[7387]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:18.790255 systemd-logind[1612]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:36:18.797228 systemd[1]: sshd@36-10.0.0.77:22-10.0.0.1:55974.service: Deactivated successfully. Jan 28 01:36:18.847121 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:36:18.851250 systemd-logind[1612]: Removed session 37. Jan 28 01:36:19.612688 kubelet[2972]: E0128 01:36:19.610477 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:36:34.533723 systemd-journald[1193]: Under memory pressure, flushing caches. 
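The recurring "Nameserver limits exceeded" warnings reflect the classic glibc resolver limit of three nameserver entries: when a pod's effective resolv.conf would carry more, kubelet truncates the list and logs the line it actually applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A small sketch that flags the same condition on a host, assuming a standard resolv.conf layout:

// Sketch: flag resolv.conf files that exceed the three-nameserver limit
// the kubelet warnings above refer to.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; entries beyond this are ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("%d nameservers configured; only the first %d take effect: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Printf("nameservers: %v\n", servers)
	}
}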
Jan 28 01:36:32.974057 systemd-resolved[1501]: Under memory pressure, flushing caches. Jan 28 01:36:34.254888 systemd-resolved[1501]: Flushed all caches. Jan 28 01:36:38.325365 systemd[1]: Started sshd@37-10.0.0.77:22-10.0.0.1:40024.service - OpenSSH per-connection server daemon (10.0.0.1:40024). Jan 28 01:36:42.091838 sshd[7403]: Accepted publickey for core from 10.0.0.1 port 40024 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:42.094958 sshd[7403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:42.211708 systemd-logind[1612]: New session 38 of user core. Jan 28 01:36:42.287956 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 28 01:36:43.284693 kubelet[2972]: E0128 01:36:43.282687 2972 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="22.538s" Jan 28 01:36:43.328323 kubelet[2972]: E0128 01:36:43.328155 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:43.333494 kubelet[2972]: E0128 01:36:43.331707 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:43.335516 kubelet[2972]: E0128 01:36:43.332401 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:43.335516 kubelet[2972]: E0128 01:36:43.333371 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:36:43.335516 kubelet[2972]: E0128 01:36:43.333430 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:36:43.335516 kubelet[2972]: E0128 01:36:43.334408 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:43.353861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106-rootfs.mount: Deactivated successfully. 
Jan 28 01:36:43.362880 kubelet[2972]: E0128 01:36:43.356528 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:36:43.362880 kubelet[2972]: E0128 01:36:43.356810 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:36:43.362880 kubelet[2972]: E0128 01:36:43.356818 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:36:43.362880 kubelet[2972]: E0128 01:36:43.358776 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:36:43.447545 containerd[1622]: time="2026-01-28T01:36:43.447252609Z" level=info msg="shim disconnected" id=1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106 
namespace=k8s.io Jan 28 01:36:43.453014 containerd[1622]: time="2026-01-28T01:36:43.452468740Z" level=warning msg="cleaning up after shim disconnected" id=1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106 namespace=k8s.io Jan 28 01:36:43.470440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3-rootfs.mount: Deactivated successfully. Jan 28 01:36:43.474879 containerd[1622]: time="2026-01-28T01:36:43.474134091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:36:43.483121 containerd[1622]: time="2026-01-28T01:36:43.482151566Z" level=info msg="shim disconnected" id=b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3 namespace=k8s.io Jan 28 01:36:43.483121 containerd[1622]: time="2026-01-28T01:36:43.482974817Z" level=warning msg="cleaning up after shim disconnected" id=b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3 namespace=k8s.io Jan 28 01:36:43.484333 containerd[1622]: time="2026-01-28T01:36:43.483587350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:36:43.486243 sshd[7403]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:43.520738 containerd[1622]: time="2026-01-28T01:36:43.519730624Z" level=error msg="collecting metrics for 1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106" error="ttrpc: closed: unknown" Jan 28 01:36:43.544461 systemd[1]: sshd@37-10.0.0.77:22-10.0.0.1:40024.service: Deactivated successfully. Jan 28 01:36:43.595134 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:36:43.614954 systemd-logind[1612]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:36:43.654985 systemd-logind[1612]: Removed session 38. Jan 28 01:36:43.801328 containerd[1622]: time="2026-01-28T01:36:43.801268194Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:36:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:36:44.283578 kubelet[2972]: I0128 01:36:44.281831 2972 scope.go:117] "RemoveContainer" containerID="1d87eeed1e24f6d401cb8717bce1fbfea16f656470730168d7b55d2cdc5f3106" Jan 28 01:36:44.283578 kubelet[2972]: E0128 01:36:44.281949 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:44.398698 containerd[1622]: time="2026-01-28T01:36:44.397562157Z" level=info msg="CreateContainer within sandbox \"e07aa3e424c7a1466817eda0cc49662afd40cae5a5c71e1a5760a2961f51bed1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 28 01:36:44.436134 kubelet[2972]: I0128 01:36:44.434833 2972 scope.go:117] "RemoveContainer" containerID="b901afbd08a487462080acb4a39f87b81f3562d6390f766434f799d54687c0f3" Jan 28 01:36:44.436134 kubelet[2972]: E0128 01:36:44.434962 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:44.482438 containerd[1622]: time="2026-01-28T01:36:44.479833183Z" level=info msg="CreateContainer within sandbox \"c3ab0e05591d7a73e82b94c1eb3a6dfd33e5be7e4ec9325435b0abcb78f905a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 28 01:36:44.667358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183467935.mount: Deactivated 
successfully. Jan 28 01:36:44.720314 containerd[1622]: time="2026-01-28T01:36:44.719861124Z" level=info msg="CreateContainer within sandbox \"e07aa3e424c7a1466817eda0cc49662afd40cae5a5c71e1a5760a2961f51bed1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0a3ece6f2964969154fdd4dfc78eec10bce85a12e487658758686a5d60fde8c7\"" Jan 28 01:36:44.721832 containerd[1622]: time="2026-01-28T01:36:44.721271530Z" level=info msg="StartContainer for \"0a3ece6f2964969154fdd4dfc78eec10bce85a12e487658758686a5d60fde8c7\"" Jan 28 01:36:44.772382 containerd[1622]: time="2026-01-28T01:36:44.771884220Z" level=info msg="CreateContainer within sandbox \"c3ab0e05591d7a73e82b94c1eb3a6dfd33e5be7e4ec9325435b0abcb78f905a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4bdeafd9b80fc9f5cf4bf15bc54fd8fa11c9c63699478b1fea73c43d700c55d8\"" Jan 28 01:36:44.777775 containerd[1622]: time="2026-01-28T01:36:44.777702614Z" level=info msg="StartContainer for \"4bdeafd9b80fc9f5cf4bf15bc54fd8fa11c9c63699478b1fea73c43d700c55d8\"" Jan 28 01:36:45.615420 containerd[1622]: time="2026-01-28T01:36:45.610026157Z" level=info msg="StartContainer for \"0a3ece6f2964969154fdd4dfc78eec10bce85a12e487658758686a5d60fde8c7\" returns successfully" Jan 28 01:36:45.715100 containerd[1622]: time="2026-01-28T01:36:45.714399632Z" level=info msg="StartContainer for \"4bdeafd9b80fc9f5cf4bf15bc54fd8fa11c9c63699478b1fea73c43d700c55d8\" returns successfully" Jan 28 01:36:46.817772 kubelet[2972]: E0128 01:36:46.816091 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:46.825642 kubelet[2972]: E0128 01:36:46.825429 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:47.868438 kubelet[2972]: E0128 01:36:47.865955 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:47.868438 kubelet[2972]: E0128 01:36:47.867578 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:48.607415 systemd[1]: Started sshd@38-10.0.0.77:22-10.0.0.1:55152.service - OpenSSH per-connection server daemon (10.0.0.1:55152). Jan 28 01:36:48.902007 kubelet[2972]: E0128 01:36:48.901883 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:48.914370 kubelet[2972]: E0128 01:36:48.913175 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:49.283036 sshd[7574]: Accepted publickey for core from 10.0.0.1 port 55152 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:49.298239 sshd[7574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:49.481053 systemd-logind[1612]: New session 39 of user core. Jan 28 01:36:49.508062 systemd[1]: Started session-39.scope - Session 39 of User core. 
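The passage above is containerd's shim-cleanup path (shim disconnected, rootfs unmounts, a cleanup warning from runc exiting 255) followed by kubelet discarding the dead containers and recreating kube-scheduler and kube-controller-manager at Attempt:1. A small filter like the following, fed a journal dump on stdin, pairs the two halves of such a restart cycle; the regular expressions are tailored to the message shapes visible in this log and may need adjusting for other containerd/kubelet versions.

// Sketch: pair "shim disconnected" events with the later "RemoveContainer"
// for the same 64-hex container ID, to surface restart cycles like the
// kube-scheduler/kube-controller-manager one above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	shim := regexp.MustCompile(`shim disconnected.*id=([0-9a-f]{64})`)
	remove := regexp.MustCompile(`RemoveContainer" containerID="([0-9a-f]{64})`)
	seen := map[string]bool{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := shim.FindStringSubmatch(line); m != nil {
			seen[m[1]] = true
		} else if m := remove.FindStringSubmatch(line); m != nil && seen[m[1]] {
			fmt.Println("restart cycle for container", m[1])
		}
	}
}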
Jan 28 01:36:50.634923 sshd[7574]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:50.668305 systemd-logind[1612]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:36:50.684059 systemd[1]: sshd@38-10.0.0.77:22-10.0.0.1:55152.service: Deactivated successfully. Jan 28 01:36:50.693870 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:36:50.699159 systemd-logind[1612]: Removed session 39. Jan 28 01:36:55.581375 kubelet[2972]: E0128 01:36:55.578711 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:36:55.581375 kubelet[2972]: E0128 01:36:55.580042 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:36:55.698313 systemd[1]: Started sshd@39-10.0.0.77:22-10.0.0.1:49024.service - OpenSSH per-connection server daemon (10.0.0.1:49024). Jan 28 01:36:55.885380 sshd[7593]: Accepted publickey for core from 10.0.0.1 port 49024 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:36:55.885490 sshd[7593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:36:55.930079 systemd-logind[1612]: New session 40 of user core. Jan 28 01:36:55.977883 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 28 01:36:56.581093 kubelet[2972]: E0128 01:36:56.579931 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:36:56.784542 sshd[7593]: pam_unix(sshd:session): session closed for user core Jan 28 01:36:56.815998 systemd[1]: sshd@39-10.0.0.77:22-10.0.0.1:49024.service: Deactivated successfully. 
Jan 28 01:36:56.836989 systemd-logind[1612]: Session 40 logged out. Waiting for processes to exit. Jan 28 01:36:56.846300 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:36:56.862505 systemd-logind[1612]: Removed session 40. Jan 28 01:36:57.570060 kubelet[2972]: E0128 01:36:57.565445 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:36:57.575244 kubelet[2972]: E0128 01:36:57.575181 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:36:57.885215 kubelet[2972]: E0128 01:36:57.884971 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:58.304444 kubelet[2972]: E0128 01:36:58.301830 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:36:58.564215 kubelet[2972]: E0128 01:36:58.563010 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:36:59.120713 kubelet[2972]: E0128 01:36:59.119388 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:01.566322 kubelet[2972]: E0128 01:37:01.565859 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:01.828103 systemd[1]: Started sshd@40-10.0.0.77:22-10.0.0.1:49028.service - OpenSSH per-connection server daemon (10.0.0.1:49028). Jan 28 01:37:02.346198 sshd[7610]: Accepted publickey for core from 10.0.0.1 port 49028 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:02.365348 sshd[7610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:02.479192 systemd-logind[1612]: New session 41 of user core. Jan 28 01:37:02.517122 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 28 01:37:03.378946 sshd[7610]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:03.411173 systemd[1]: sshd@40-10.0.0.77:22-10.0.0.1:49028.service: Deactivated successfully. Jan 28 01:37:03.446279 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:37:03.450052 systemd-logind[1612]: Session 41 logged out. Waiting for processes to exit. Jan 28 01:37:03.467880 systemd-logind[1612]: Removed session 41. Jan 28 01:37:07.819028 kubelet[2972]: E0128 01:37:07.814493 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:37:10.700391 systemd[1]: Started sshd@41-10.0.0.77:22-10.0.0.1:37704.service - OpenSSH per-connection server daemon (10.0.0.1:37704). 
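The spacing between repeats of these ImagePullBackOff entries is consistent with an exponential backoff that doubles from a short base up to a fixed cap, which is why the same pods resurface every few minutes rather than continuously. The 10-second base and 5-minute cap in the sketch below are the commonly cited kubelet defaults, assumed here rather than read from this node's configuration:

// Sketch of an exponential image-pull backoff schedule matching the
// cadence of the entries above; base and cap are assumed defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	const base, maxDelay = 10 * time.Second, 5 * time.Minute
	delay := base
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d scheduled after %v of backoff\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // retries settle at the cap, as seen in the log
		}
	}
}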
Jan 28 01:37:12.504737 kubelet[2972]: E0128 01:37:12.488951 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:37:12.576939 kubelet[2972]: E0128 01:37:12.574076 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:37:12.592331 kubelet[2972]: E0128 01:37:12.592234 2972 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.394s" Jan 28 01:37:12.708537 sshd[7628]: Accepted publickey for core from 10.0.0.1 port 37704 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:12.716906 sshd[7628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:12.782723 systemd-logind[1612]: New session 42 of user core. Jan 28 01:37:12.813540 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 28 01:37:12.815129 kubelet[2972]: E0128 01:37:12.801415 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:37:12.815129 kubelet[2972]: E0128 01:37:12.811094 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:37:12.815129 kubelet[2972]: E0128 01:37:12.812468 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:37:13.694725 sshd[7628]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:13.715955 systemd[1]: sshd@41-10.0.0.77:22-10.0.0.1:37704.service: Deactivated successfully. Jan 28 01:37:13.733106 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:37:13.761937 systemd-logind[1612]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:37:13.771250 systemd-logind[1612]: Removed session 42. Jan 28 01:37:18.771011 systemd[1]: Started sshd@42-10.0.0.77:22-10.0.0.1:52948.service - OpenSSH per-connection server daemon (10.0.0.1:52948). Jan 28 01:37:19.389816 sshd[7676]: Accepted publickey for core from 10.0.0.1 port 52948 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:19.416348 sshd[7676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:19.503823 systemd-logind[1612]: New session 43 of user core. Jan 28 01:37:19.557549 systemd[1]: Started session-43.scope - Session 43 of User core. 
Jan 28 01:37:19.575540 kubelet[2972]: E0128 01:37:19.574381 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:19.608386 kubelet[2972]: E0128 01:37:19.605390 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:37:20.611794 systemd-journald[1193]: Under memory pressure, flushing caches. Jan 28 01:37:20.597235 systemd-resolved[1501]: Under memory pressure, flushing caches. Jan 28 01:37:20.597295 systemd-resolved[1501]: Flushed all caches. Jan 28 01:37:21.482577 sshd[7676]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:21.526319 systemd[1]: sshd@42-10.0.0.77:22-10.0.0.1:52948.service: Deactivated successfully. Jan 28 01:37:21.572752 systemd[1]: session-43.scope: Deactivated successfully. Jan 28 01:37:21.605536 systemd-logind[1612]: Session 43 logged out. Waiting for processes to exit. Jan 28 01:37:21.621743 systemd-logind[1612]: Removed session 43. 
Jan 28 01:37:24.590786 kubelet[2972]: E0128 01:37:24.590287 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:37:25.936411 kubelet[2972]: E0128 01:37:25.936181 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:37:25.940037 kubelet[2972]: E0128 01:37:25.939702 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:37:26.566533 systemd[1]: Started sshd@43-10.0.0.77:22-10.0.0.1:39884.service - OpenSSH per-connection server daemon (10.0.0.1:39884). 
Jan 28 01:37:26.588714 kubelet[2972]: E0128 01:37:26.574366 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:37:26.599306 kubelet[2972]: E0128 01:37:26.599236 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:37:26.841786 sshd[7705]: Accepted publickey for core from 10.0.0.1 port 39884 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:26.848015 sshd[7705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:26.879246 systemd-logind[1612]: New session 44 of user core. Jan 28 01:37:26.911698 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:37:27.877272 sshd[7705]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:27.934544 systemd-logind[1612]: Session 44 logged out. Waiting for processes to exit. Jan 28 01:37:27.986467 systemd[1]: sshd@43-10.0.0.77:22-10.0.0.1:39884.service: Deactivated successfully. Jan 28 01:37:28.004991 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:37:28.032558 systemd-logind[1612]: Removed session 44. 
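Given how often the same "Error syncing pod" lines recur, a quick tally of the pod="namespace/name" field makes the scope of the outage obvious: only the Calico add-on pods are stuck, while the node's control-plane containers keep running. A sketch that counts those fields from a log dump on stdin:

// Sketch: tally pod="namespace/name" fields in repeated
// "Error syncing pod" entries to see which workloads are stuck.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	pat := regexp.MustCompile(`pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := pat.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	pods := make([]string, 0, len(counts))
	for p := range counts {
		pods = append(pods, p)
	}
	sort.Slice(pods, func(i, j int) bool { return counts[pods[i]] > counts[pods[j]] })
	for _, p := range pods {
		fmt.Printf("%5d  %s\n", counts[p], p)
	}
}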
Jan 28 01:37:32.608778 kubelet[2972]: E0128 01:37:32.607554 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:37:32.717803 systemd[1]: Started sshd@44-10.0.0.77:22-10.0.0.1:50170.service - OpenSSH per-connection server daemon (10.0.0.1:50170). Jan 28 01:37:32.880852 sshd[7726]: Accepted publickey for core from 10.0.0.1 port 50170 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:32.904981 sshd[7726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:32.942540 systemd-logind[1612]: New session 45 of user core. Jan 28 01:37:32.971518 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 28 01:37:33.706982 sshd[7726]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:33.744875 systemd[1]: sshd@44-10.0.0.77:22-10.0.0.1:50170.service: Deactivated successfully. Jan 28 01:37:33.795148 systemd[1]: session-45.scope: Deactivated successfully. Jan 28 01:37:33.812693 systemd-logind[1612]: Session 45 logged out. Waiting for processes to exit. Jan 28 01:37:33.852470 systemd-logind[1612]: Removed session 45. Jan 28 01:37:36.573800 kubelet[2972]: E0128 01:37:36.571935 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:37:36.576850 kubelet[2972]: E0128 01:37:36.574897 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:37:38.747999 systemd[1]: Started sshd@45-10.0.0.77:22-10.0.0.1:50186.service - OpenSSH per-connection server daemon (10.0.0.1:50186). 
Jan 28 01:37:38.932523 sshd[7750]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:38.949093 sshd[7750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:38.972863 systemd-logind[1612]: New session 46 of user core. Jan 28 01:37:38.991142 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 28 01:37:39.454702 sshd[7750]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:39.466778 systemd[1]: sshd@45-10.0.0.77:22-10.0.0.1:50186.service: Deactivated successfully. Jan 28 01:37:39.482942 systemd[1]: session-46.scope: Deactivated successfully. Jan 28 01:37:39.499681 systemd-logind[1612]: Session 46 logged out. Waiting for processes to exit. Jan 28 01:37:39.515350 systemd[1]: Started sshd@46-10.0.0.77:22-10.0.0.1:50202.service - OpenSSH per-connection server daemon (10.0.0.1:50202). Jan 28 01:37:39.520077 systemd-logind[1612]: Removed session 46. Jan 28 01:37:39.566541 kubelet[2972]: E0128 01:37:39.565769 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:37:39.615808 sshd[7765]: Accepted publickey for core from 10.0.0.1 port 50202 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:39.625269 sshd[7765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:39.663121 systemd-logind[1612]: New session 47 of user core. Jan 28 01:37:39.691128 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 28 01:37:40.574658 kubelet[2972]: E0128 01:37:40.572570 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:37:41.111681 sshd[7765]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:41.136980 systemd[1]: Started sshd@47-10.0.0.77:22-10.0.0.1:50214.service - OpenSSH per-connection server daemon (10.0.0.1:50214). Jan 28 01:37:41.142794 systemd[1]: sshd@46-10.0.0.77:22-10.0.0.1:50202.service: Deactivated successfully. Jan 28 01:37:41.192938 systemd[1]: session-47.scope: Deactivated successfully. 
Jan 28 01:37:41.204234 systemd-logind[1612]: Session 47 logged out. Waiting for processes to exit. Jan 28 01:37:41.234757 systemd-logind[1612]: Removed session 47. Jan 28 01:37:41.545079 sshd[7776]: Accepted publickey for core from 10.0.0.1 port 50214 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:41.551442 sshd[7776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:41.556785 kubelet[2972]: E0128 01:37:41.556250 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:37:41.558770 kubelet[2972]: E0128 01:37:41.558023 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:41.571396 systemd-logind[1612]: New session 48 of user core. Jan 28 01:37:41.595706 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 28 01:37:43.835666 sshd[7776]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:43.896392 systemd[1]: Started sshd@48-10.0.0.77:22-10.0.0.1:45638.service - OpenSSH per-connection server daemon (10.0.0.1:45638). Jan 28 01:37:43.975965 systemd-logind[1612]: Session 48 logged out. Waiting for processes to exit. Jan 28 01:37:43.976822 systemd[1]: sshd@47-10.0.0.77:22-10.0.0.1:50214.service: Deactivated successfully. Jan 28 01:37:44.000816 systemd[1]: session-48.scope: Deactivated successfully. Jan 28 01:37:44.025182 systemd-logind[1612]: Removed session 48. Jan 28 01:37:44.127381 sshd[7795]: Accepted publickey for core from 10.0.0.1 port 45638 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:44.144339 sshd[7795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:45.916182 systemd-logind[1612]: New session 49 of user core. Jan 28 01:37:45.924977 systemd[1]: Started session-49.scope - Session 49 of User core. 
Jan 28 01:37:45.948866 kubelet[2972]: E0128 01:37:45.945988 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:37:57.834730 kubelet[2972]: E0128 01:37:57.833095 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:37:57.962309 systemd[1]: run-containerd-runc-k8s.io-0f41e65c2dff4d8b21058d4f03b0f6e628652a2c84fcad95271bdf4b95ea4775-runc.hdBpPe.mount: Deactivated successfully. Jan 28 01:37:58.221738 kubelet[2972]: E0128 01:37:58.210733 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:37:58.232799 kubelet[2972]: E0128 01:37:58.227814 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:37:58.236370 kubelet[2972]: E0128 01:37:58.236002 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:37:58.277945 kubelet[2972]: E0128 01:37:58.274537 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:37:58.280221 kubelet[2972]: E0128 01:37:58.279588 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:37:59.032563 sshd[7795]: pam_unix(sshd:session): session closed for user core Jan 28 01:37:59.144489 systemd[1]: Started sshd@49-10.0.0.77:22-10.0.0.1:37836.service - OpenSSH per-connection server daemon (10.0.0.1:37836). Jan 28 01:37:59.155286 systemd[1]: sshd@48-10.0.0.77:22-10.0.0.1:45638.service: Deactivated successfully. Jan 28 01:37:59.186307 systemd[1]: session-49.scope: Deactivated successfully. Jan 28 01:37:59.194211 systemd-logind[1612]: Session 49 logged out. Waiting for processes to exit. Jan 28 01:37:59.205319 systemd-logind[1612]: Removed session 49. Jan 28 01:37:59.491025 sshd[7834]: Accepted publickey for core from 10.0.0.1 port 37836 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:37:59.504392 sshd[7834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:37:59.537263 systemd-logind[1612]: New session 50 of user core. Jan 28 01:37:59.547359 systemd[1]: Started session-50.scope - Session 50 of User core. 
Jan 28 01:37:59.567467 kubelet[2972]: E0128 01:37:59.564703 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:38:00.716874 sshd[7834]: pam_unix(sshd:session): session closed for user core Jan 28 01:38:00.725668 systemd-logind[1612]: Session 50 logged out. Waiting for processes to exit. Jan 28 01:38:00.729377 systemd[1]: sshd@49-10.0.0.77:22-10.0.0.1:37836.service: Deactivated successfully. Jan 28 01:38:00.738714 systemd[1]: session-50.scope: Deactivated successfully. Jan 28 01:38:00.741736 systemd-logind[1612]: Removed session 50. Jan 28 01:38:01.710815 kubelet[2972]: E0128 01:38:01.710708 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:05.795903 systemd[1]: Started sshd@50-10.0.0.77:22-10.0.0.1:37210.service - OpenSSH per-connection server daemon (10.0.0.1:37210). Jan 28 01:38:06.187474 sshd[7858]: Accepted publickey for core from 10.0.0.1 port 37210 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:38:06.204699 sshd[7858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:38:06.364790 systemd-logind[1612]: New session 51 of user core. Jan 28 01:38:06.389169 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 28 01:38:06.585044 kubelet[2972]: E0128 01:38:06.583788 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:07.192949 sshd[7858]: pam_unix(sshd:session): session closed for user core Jan 28 01:38:07.207885 systemd-logind[1612]: Session 51 logged out. Waiting for processes to exit. Jan 28 01:38:07.210380 systemd[1]: sshd@50-10.0.0.77:22-10.0.0.1:37210.service: Deactivated successfully. Jan 28 01:38:07.224435 systemd[1]: session-51.scope: Deactivated successfully. Jan 28 01:38:07.238386 systemd-logind[1612]: Removed session 51. 
Jan 28 01:38:07.562096 kubelet[2972]: E0128 01:38:07.561480 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:09.561677 kubelet[2972]: E0128 01:38:09.561310 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d" Jan 28 01:38:11.578668 kubelet[2972]: E0128 01:38:11.577354 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46" Jan 28 01:38:11.589218 kubelet[2972]: E0128 01:38:11.587025 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8" Jan 28 01:38:11.592053 kubelet[2972]: E0128 01:38:11.591371 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da" Jan 28 01:38:12.234221 systemd[1]: Started sshd@51-10.0.0.77:22-10.0.0.1:37212.service - OpenSSH per-connection server daemon (10.0.0.1:37212). 
Jan 28 01:38:12.470728 sshd[7873]: Accepted publickey for core from 10.0.0.1 port 37212 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:38:12.480995 sshd[7873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:38:12.522699 systemd-logind[1612]: New session 52 of user core. Jan 28 01:38:12.530946 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 28 01:38:12.570746 kubelet[2972]: E0128 01:38:12.560919 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:38:12.570746 kubelet[2972]: E0128 01:38:12.564807 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8" Jan 28 01:38:12.588796 kubelet[2972]: E0128 01:38:12.578752 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141" Jan 28 01:38:13.200920 sshd[7873]: pam_unix(sshd:session): session closed for user core Jan 28 01:38:13.218086 systemd-logind[1612]: Session 52 logged out. Waiting for processes to exit. Jan 28 01:38:13.220734 systemd[1]: sshd@51-10.0.0.77:22-10.0.0.1:37212.service: Deactivated successfully. Jan 28 01:38:13.277175 systemd[1]: session-52.scope: Deactivated successfully. Jan 28 01:38:13.302040 systemd-logind[1612]: Removed session 52. Jan 28 01:38:18.271375 systemd[1]: Started sshd@52-10.0.0.77:22-10.0.0.1:42504.service - OpenSSH per-connection server daemon (10.0.0.1:42504). Jan 28 01:38:18.402557 sshd[7909]: Accepted publickey for core from 10.0.0.1 port 42504 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:38:18.407877 sshd[7909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:38:18.446219 systemd-logind[1612]: New session 53 of user core. Jan 28 01:38:18.452283 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 28 01:38:19.103956 sshd[7909]: pam_unix(sshd:session): session closed for user core Jan 28 01:38:19.128919 systemd[1]: sshd@52-10.0.0.77:22-10.0.0.1:42504.service: Deactivated successfully. 
Jan 28 01:38:19.145762 systemd-logind[1612]: Session 53 logged out. Waiting for processes to exit.
Jan 28 01:38:19.159212 systemd[1]: session-53.scope: Deactivated successfully.
Jan 28 01:38:19.188830 systemd-logind[1612]: Removed session 53.
Jan 28 01:38:20.588259 kubelet[2972]: E0128 01:38:20.586990 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:38:22.592862 kubelet[2972]: E0128 01:38:22.591122 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d"
Jan 28 01:38:22.592862 kubelet[2972]: E0128 01:38:22.591870 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:38:23.568539 kubelet[2972]: E0128 01:38:23.566930 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46"
Jan 28 01:38:23.568539 kubelet[2972]: E0128 01:38:23.567066 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8"
Jan 28 01:38:24.128204 systemd[1]: Started sshd@53-10.0.0.77:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290).
Jan 28 01:38:24.264907 sshd[7931]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:38:24.272455 sshd[7931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:38:24.301977 systemd-logind[1612]: New session 54 of user core.
Jan 28 01:38:24.333036 systemd[1]: Started session-54.scope - Session 54 of User core.
Jan 28 01:38:24.815140 sshd[7931]: pam_unix(sshd:session): session closed for user core
Jan 28 01:38:24.832057 systemd-logind[1612]: Session 54 logged out. Waiting for processes to exit.
Jan 28 01:38:24.836678 systemd[1]: sshd@53-10.0.0.77:22-10.0.0.1:60290.service: Deactivated successfully.
Jan 28 01:38:24.854337 systemd[1]: session-54.scope: Deactivated successfully.
Jan 28 01:38:24.870455 systemd-logind[1612]: Removed session 54.
Jan 28 01:38:26.565373 kubelet[2972]: E0128 01:38:26.563747 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141"
Jan 28 01:38:26.566228 containerd[1622]: time="2026-01-28T01:38:26.564832364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 01:38:26.669217 containerd[1622]: time="2026-01-28T01:38:26.668906111Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:26.686165 containerd[1622]: time="2026-01-28T01:38:26.685166119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 01:38:26.686165 containerd[1622]: time="2026-01-28T01:38:26.685485392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 01:38:26.686777 kubelet[2972]: E0128 01:38:26.686571 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:38:26.686949 kubelet[2972]: E0128 01:38:26.686925 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:38:26.687301 kubelet[2972]: E0128 01:38:26.687241 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx52m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-5qb5l_calico-apiserver(25dca920-f21c-49d2-adf9-753622c450d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:26.691011 kubelet[2972]: E0128 01:38:26.690873 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8"
Jan 28 01:38:29.893497 systemd[1]: Started sshd@54-10.0.0.77:22-10.0.0.1:60306.service - OpenSSH per-connection server daemon (10.0.0.1:60306).
Jan 28 01:38:29.993463 sshd[7950]: Accepted publickey for core from 10.0.0.1 port 60306 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:38:30.009555 sshd[7950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:38:30.077883 systemd-logind[1612]: New session 55 of user core.
Jan 28 01:38:30.093815 systemd[1]: Started session-55.scope - Session 55 of User core.
Jan 28 01:38:30.581050 kubelet[2972]: E0128 01:38:30.580490 2972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:38:31.108031 sshd[7950]: pam_unix(sshd:session): session closed for user core
Jan 28 01:38:31.127713 systemd[1]: sshd@54-10.0.0.77:22-10.0.0.1:60306.service: Deactivated successfully.
Jan 28 01:38:31.161509 systemd[1]: session-55.scope: Deactivated successfully.
Jan 28 01:38:31.163937 systemd-logind[1612]: Session 55 logged out. Waiting for processes to exit.
Jan 28 01:38:31.178926 systemd-logind[1612]: Removed session 55.
Jan 28 01:38:34.573045 kubelet[2972]: E0128 01:38:34.572351 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:38:34.584008 containerd[1622]: time="2026-01-28T01:38:34.580087236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 01:38:34.764008 containerd[1622]: time="2026-01-28T01:38:34.762533331Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:34.781538 containerd[1622]: time="2026-01-28T01:38:34.781414949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 01:38:34.784287 containerd[1622]: time="2026-01-28T01:38:34.782684967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 01:38:34.784453 kubelet[2972]: E0128 01:38:34.783526 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:38:34.784453 kubelet[2972]: E0128 01:38:34.783584 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 01:38:34.784453 kubelet[2972]: E0128 01:38:34.783909 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4s7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69686dc768-ln9mw_calico-apiserver(293f11a4-1519-4e40-8e4f-23ffad2f9d2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:34.791162 kubelet[2972]: E0128 01:38:34.786462 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d"
Jan 28 01:38:36.172303 systemd[1]: Started sshd@55-10.0.0.77:22-10.0.0.1:41642.service - OpenSSH per-connection server daemon (10.0.0.1:41642).
Jan 28 01:38:36.342326 sshd[7968]: Accepted publickey for core from 10.0.0.1 port 41642 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:38:36.369719 sshd[7968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:38:36.408096 systemd-logind[1612]: New session 56 of user core.
Jan 28 01:38:36.440192 systemd[1]: Started session-56.scope - Session 56 of User core.
Jan 28 01:38:36.596705 containerd[1622]: time="2026-01-28T01:38:36.587107069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 28 01:38:36.597364 kubelet[2972]: E0128 01:38:36.595188 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dp6nh" podUID="a0975c98-58e0-4afd-9150-95ec5af111e8"
Jan 28 01:38:36.720182 containerd[1622]: time="2026-01-28T01:38:36.719001954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:36.731217 containerd[1622]: time="2026-01-28T01:38:36.726539366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 28 01:38:36.731217 containerd[1622]: time="2026-01-28T01:38:36.726759030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 28 01:38:36.739837 kubelet[2972]: E0128 01:38:36.731558 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:38:36.739837 kubelet[2972]: E0128 01:38:36.731728 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 01:38:36.739837 kubelet[2972]: E0128 01:38:36.737722 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f96f445cb-js8kb_calico-system(7b83327f-83d8-4d0b-8be8-e67980a37b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:36.740349 kubelet[2972]: E0128 01:38:36.740160 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f96f445cb-js8kb" podUID="7b83327f-83d8-4d0b-8be8-e67980a37b46"
Jan 28 01:38:37.035734 sshd[7968]: pam_unix(sshd:session): session closed for user core
Jan 28 01:38:37.069490 systemd[1]: sshd@55-10.0.0.77:22-10.0.0.1:41642.service: Deactivated successfully.
Jan 28 01:38:37.094242 systemd-logind[1612]: Session 56 logged out. Waiting for processes to exit.
Jan 28 01:38:37.094334 systemd[1]: session-56.scope: Deactivated successfully.
Jan 28 01:38:37.111226 systemd-logind[1612]: Removed session 56.
Jan 28 01:38:38.569426 kubelet[2972]: E0128 01:38:38.568673 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141"
Jan 28 01:38:42.099188 systemd[1]: Started sshd@56-10.0.0.77:22-10.0.0.1:41650.service - OpenSSH per-connection server daemon (10.0.0.1:41650).
Jan 28 01:38:42.230314 sshd[7984]: Accepted publickey for core from 10.0.0.1 port 41650 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:38:42.246094 sshd[7984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:38:42.278923 systemd-logind[1612]: New session 57 of user core.
Jan 28 01:38:42.294813 systemd[1]: Started session-57.scope - Session 57 of User core.
Jan 28 01:38:42.596762 kubelet[2972]: E0128 01:38:42.586191 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-5qb5l" podUID="25dca920-f21c-49d2-adf9-753622c450d8"
Jan 28 01:38:42.942522 sshd[7984]: pam_unix(sshd:session): session closed for user core
Jan 28 01:38:42.951242 systemd[1]: sshd@56-10.0.0.77:22-10.0.0.1:41650.service: Deactivated successfully.
Jan 28 01:38:42.966513 systemd-logind[1612]: Session 57 logged out. Waiting for processes to exit.
Jan 28 01:38:42.977986 systemd[1]: session-57.scope: Deactivated successfully.
Jan 28 01:38:42.984929 systemd-logind[1612]: Removed session 57.
Jan 28 01:38:46.597756 kubelet[2972]: E0128 01:38:46.589336 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69686dc768-ln9mw" podUID="293f11a4-1519-4e40-8e4f-23ffad2f9d2d"
Jan 28 01:38:46.606367 containerd[1622]: time="2026-01-28T01:38:46.605929619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 28 01:38:46.738993 containerd[1622]: time="2026-01-28T01:38:46.737428937Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:46.757448 containerd[1622]: time="2026-01-28T01:38:46.756330781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 28 01:38:46.757448 containerd[1622]: time="2026-01-28T01:38:46.756469303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 28 01:38:46.757812 kubelet[2972]: E0128 01:38:46.756730 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 01:38:46.757812 kubelet[2972]: E0128 01:38:46.756786 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 01:38:46.757812 kubelet[2972]: E0128 01:38:46.756922 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:46.760249 containerd[1622]: time="2026-01-28T01:38:46.760217061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 28 01:38:46.902957 containerd[1622]: time="2026-01-28T01:38:46.901792050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:46.920269 containerd[1622]: time="2026-01-28T01:38:46.919998822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 28 01:38:46.920269 containerd[1622]: time="2026-01-28T01:38:46.920114651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 28 01:38:46.921790 kubelet[2972]: E0128 01:38:46.921736 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 28 01:38:46.922453 kubelet[2972]: E0128 01:38:46.921922 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 28 01:38:46.922453 kubelet[2972]: E0128 01:38:46.922110 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tp26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9gwj5_calico-system(b4b5e90d-930c-4b60-ab0a-ec73967e82da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:46.926580 kubelet[2972]: E0128 01:38:46.926505 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gwj5" podUID="b4b5e90d-930c-4b60-ab0a-ec73967e82da"
Jan 28 01:38:47.989442 systemd[1]: Started sshd@57-10.0.0.77:22-10.0.0.1:53494.service - OpenSSH per-connection server daemon (10.0.0.1:53494).
Jan 28 01:38:48.238802 sshd[8023]: Accepted publickey for core from 10.0.0.1 port 53494 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:38:48.244556 sshd[8023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:38:48.291427 systemd-logind[1612]: New session 58 of user core.
Jan 28 01:38:48.330912 systemd[1]: Started session-58.scope - Session 58 of User core.
Jan 28 01:38:49.382468 sshd[8023]: pam_unix(sshd:session): session closed for user core
Jan 28 01:38:49.515153 systemd[1]: sshd@57-10.0.0.77:22-10.0.0.1:53494.service: Deactivated successfully.
Jan 28 01:38:49.565590 systemd-logind[1612]: Session 58 logged out. Waiting for processes to exit.
Jan 28 01:38:49.572500 systemd[1]: session-58.scope: Deactivated successfully.
Jan 28 01:38:49.632448 systemd-logind[1612]: Removed session 58.
Jan 28 01:38:50.593576 containerd[1622]: time="2026-01-28T01:38:50.589785408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 28 01:38:50.816853 containerd[1622]: time="2026-01-28T01:38:50.816470410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:50.834917 containerd[1622]: time="2026-01-28T01:38:50.834861944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 28 01:38:50.842741 containerd[1622]: time="2026-01-28T01:38:50.835083392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 28 01:38:50.870390 kubelet[2972]: E0128 01:38:50.843530 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:38:50.870390 kubelet[2972]: E0128 01:38:50.843699 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 01:38:50.870390 kubelet[2972]: E0128 01:38:50.843837 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c57ca85a0f704f7f9110497d6a428efd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:50.874695 containerd[1622]: time="2026-01-28T01:38:50.874526963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 28 01:38:50.976425 containerd[1622]: time="2026-01-28T01:38:50.976357734Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 01:38:50.986407 containerd[1622]: time="2026-01-28T01:38:50.986342334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 28 01:38:50.986900 containerd[1622]: time="2026-01-28T01:38:50.986449196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 28 01:38:50.991436 kubelet[2972]: E0128 01:38:50.987194 2972 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:38:50.991436 kubelet[2972]: E0128 01:38:50.987320 2972 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 01:38:50.991436 kubelet[2972]: E0128 01:38:50.988668 2972 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nqlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f975b9dd9-g8mzf_calico-system(5859ba2a-a016-4346-9bde-cada03fa1141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 28 01:38:50.991436 kubelet[2972]: E0128 01:38:50.989830 2972 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f975b9dd9-g8mzf" podUID="5859ba2a-a016-4346-9bde-cada03fa1141"