Jan 24 00:41:14.515342 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:41:14.515371 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:41:14.515388 kernel: BIOS-provided physical RAM map: Jan 24 00:41:14.515398 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 24 00:41:14.515407 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 24 00:41:14.515417 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 00:41:14.515426 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 24 00:41:14.515431 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 24 00:41:14.515437 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:41:14.515539 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 00:41:14.515551 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:41:14.515560 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 00:41:14.515569 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:41:14.515578 kernel: NX (Execute Disable) protection: active Jan 24 00:41:14.515589 kernel: APIC: Static calls initialized Jan 24 00:41:14.515600 kernel: SMBIOS 2.8 present. 
Jan 24 00:41:14.515607 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 24 00:41:14.515612 kernel: Hypervisor detected: KVM Jan 24 00:41:14.515618 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:41:14.515624 kernel: kvm-clock: using sched offset of 6071231691 cycles Jan 24 00:41:14.515631 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:41:14.515637 kernel: tsc: Detected 2445.426 MHz processor Jan 24 00:41:14.515643 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:41:14.515649 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:41:14.515658 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 24 00:41:14.515664 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 00:41:14.515676 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:41:14.515687 kernel: Using GB pages for direct mapping Jan 24 00:41:14.515698 kernel: ACPI: Early table checksum verification disabled Jan 24 00:41:14.515708 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 24 00:41:14.515715 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515721 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515728 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515737 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 24 00:41:14.515743 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515752 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515762 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515773 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:41:14.515783 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 24 00:41:14.515791 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 24 00:41:14.515801 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 24 00:41:14.515809 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 24 00:41:14.515816 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 24 00:41:14.515822 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 24 00:41:14.515828 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 24 00:41:14.515834 kernel: No NUMA configuration found Jan 24 00:41:14.515841 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 24 00:41:14.515849 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 24 00:41:14.515855 kernel: Zone ranges: Jan 24 00:41:14.515862 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:41:14.515868 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 24 00:41:14.515874 kernel: Normal empty Jan 24 00:41:14.515880 kernel: Movable zone start for each node Jan 24 00:41:14.515886 kernel: Early memory node ranges Jan 24 00:41:14.515894 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 24 00:41:14.515905 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 24 00:41:14.515916 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:41:14.515931 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:41:14.515937 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:41:14.515943 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 24 00:41:14.515949 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:41:14.515956 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:41:14.515962 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:41:14.515968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:41:14.515975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:41:14.515981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:41:14.515989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:41:14.515996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:41:14.516002 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:41:14.516008 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:41:14.516014 kernel: TSC deadline timer available Jan 24 00:41:14.516020 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 24 00:41:14.516031 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:41:14.516043 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:41:14.516054 kernel: kvm-guest: setup PV sched yield Jan 24 00:41:14.516065 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:41:14.516071 kernel: Booting paravirtualized kernel on KVM Jan 24 00:41:14.516078 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:41:14.516084 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 00:41:14.516090 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 24 00:41:14.516097 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 24 00:41:14.516103 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 00:41:14.516182 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:41:14.516189 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:41:14.516200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:41:14.516206 kernel: random: crng init done Jan 24 00:41:14.516213 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:41:14.516219 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:41:14.516225 kernel: Fallback order for Node 0: 0 Jan 24 00:41:14.516231 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:41:14.516237 kernel: Policy zone: DMA32 Jan 24 00:41:14.516244 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:41:14.516250 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved) Jan 24 00:41:14.516259 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 00:41:14.516265 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:41:14.516271 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:41:14.516277 kernel: Dynamic Preempt: voluntary Jan 24 00:41:14.516283 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:41:14.516290 kernel: rcu: RCU event tracing is enabled. Jan 24 00:41:14.516297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 00:41:14.516303 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:41:14.516310 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:41:14.516318 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:41:14.516324 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:41:14.516331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 00:41:14.516337 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 00:41:14.516343 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:41:14.516349 kernel: Console: colour VGA+ 80x25 Jan 24 00:41:14.516355 kernel: printk: console [ttyS0] enabled Jan 24 00:41:14.516361 kernel: ACPI: Core revision 20230628 Jan 24 00:41:14.516367 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:41:14.516376 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:41:14.516382 kernel: x2apic enabled Jan 24 00:41:14.516388 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:41:14.516394 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:41:14.516401 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:41:14.516407 kernel: kvm-guest: setup PV IPIs Jan 24 00:41:14.516413 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:41:14.516429 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:41:14.516436 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 24 00:41:14.516630 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:41:14.516642 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:41:14.516649 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:41:14.516660 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:41:14.516666 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:41:14.516673 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:41:14.516679 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:41:14.516686 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:41:14.516695 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:41:14.516702 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:41:14.516709 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:41:14.516715 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:41:14.516722 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:41:14.516729 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:41:14.516735 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:41:14.516741 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:41:14.516750 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:41:14.516757 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 00:41:14.516764 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:41:14.516770 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:41:14.516776 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:41:14.516783 kernel: landlock: Up and running. Jan 24 00:41:14.516789 kernel: SELinux: Initializing. Jan 24 00:41:14.516796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:41:14.516802 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:41:14.516812 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:41:14.516818 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:41:14.516825 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:41:14.516831 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:41:14.516838 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 00:41:14.516844 kernel: signal: max sigframe size: 1776 Jan 24 00:41:14.516851 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:41:14.516858 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:41:14.516864 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:41:14.516873 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:41:14.516880 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:41:14.516886 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:41:14.516892 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 00:41:14.516899 kernel: smpboot: Max logical packages: 1 Jan 24 00:41:14.516905 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 00:41:14.516912 kernel: devtmpfs: initialized Jan 24 00:41:14.516918 kernel: x86/mm: Memory block size: 128MB Jan 24 00:41:14.516924 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:41:14.516933 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 00:41:14.516940 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:41:14.516946 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:41:14.516953 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:41:14.516959 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:41:14.516966 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:41:14.516972 kernel: audit: type=2000 audit(1769215270.785:1): state=initialized audit_enabled=0 res=1 Jan 24 00:41:14.516979 kernel: cpuidle: using governor menu Jan 24 00:41:14.516985 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:41:14.516994 kernel: dca service started, version 1.12.1 Jan 24 00:41:14.517000 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 00:41:14.517007 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:41:14.517014 kernel: PCI: Using configuration type 1 for base access Jan 24 00:41:14.517020 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:41:14.517027 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:41:14.517033 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:41:14.517040 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:41:14.517046 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:41:14.517055 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:41:14.517062 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:41:14.517068 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:41:14.517075 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:41:14.517081 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:41:14.517088 kernel: ACPI: Interpreter enabled Jan 24 00:41:14.517094 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:41:14.517100 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:41:14.517173 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:41:14.517184 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:41:14.517191 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:41:14.517197 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:41:14.517407 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:41:14.517631 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:41:14.517763 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:41:14.517773 kernel: PCI host bridge to bus 0000:00 Jan 24 00:41:14.517905 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:41:14.518017 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:41:14.518200 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:41:14.518317 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 00:41:14.518431 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:41:14.518823 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 24 00:41:14.519227 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:41:14.519380 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:41:14.519603 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 24 00:41:14.519732 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 24 00:41:14.519882 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 24 00:41:14.520008 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:41:14.520204 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:41:14.520373 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 24 00:41:14.520614 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 24 00:41:14.520764 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 24 00:41:14.520907 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:41:14.521191 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 24 00:41:14.521322 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 24 00:41:14.521516 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 24 00:41:14.521654 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 24 00:41:14.521784 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:41:14.521905 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 24 00:41:14.522023 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 24 00:41:14.522218 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 24 00:41:14.522342 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:41:14.522555 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:41:14.522687 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:41:14.522841 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:41:14.522964 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 24 00:41:14.523082 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 24 00:41:14.523288 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:41:14.523411 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 24 00:41:14.523425 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:41:14.523432 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:41:14.523439 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:41:14.523521 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:41:14.523528 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:41:14.523535 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:41:14.523541 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:41:14.523548 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:41:14.523555 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:41:14.523564 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:41:14.523571 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:41:14.523578 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:41:14.523584 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:41:14.523591 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:41:14.523597 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:41:14.523604 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:41:14.523611 kernel: iommu: Default domain type: Translated Jan 24 00:41:14.523617 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:41:14.523626 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:41:14.523633 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:41:14.523639 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 24 00:41:14.523646 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 24 00:41:14.523778 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:41:14.523957 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:41:14.524390 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:41:14.524409 kernel: vgaarb: loaded Jan 24 00:41:14.524421 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:41:14.524439 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:41:14.524624 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:41:14.524637 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:41:14.524649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:41:14.524661 kernel: pnp: PnP ACPI init Jan 24 00:41:14.524856 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:41:14.524876 kernel: pnp: PnP ACPI: found 6 devices Jan 24 00:41:14.524887 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:41:14.524905 kernel: NET: Registered PF_INET protocol family Jan 24 00:41:14.524915 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:41:14.524927 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:41:14.524940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:41:14.524950 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:41:14.524959 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:41:14.524972 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:41:14.524983 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:41:14.524993 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:41:14.525011 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:41:14.525023 kernel: NET: Registered PF_XDP protocol family Jan 24 00:41:14.525238 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:41:14.525354 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:41:14.525549 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:41:14.525665 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 00:41:14.525774 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:41:14.525882 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 24 00:41:14.525896 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:41:14.525903 kernel: Initialise system trusted keyrings Jan 24 00:41:14.525910 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:41:14.525917 kernel: Key type asymmetric registered Jan 24 00:41:14.525923 kernel: Asymmetric key parser 'x509' registered Jan 24 00:41:14.525930 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:41:14.525936 kernel: io scheduler mq-deadline registered Jan 24 00:41:14.525943 kernel: io scheduler kyber registered Jan 24 00:41:14.525949 kernel: io scheduler bfq registered Jan 24 00:41:14.525959 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:41:14.525966 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:41:14.525973 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:41:14.525980 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:41:14.525986 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:41:14.525993 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:41:14.526000 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:41:14.526006 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:41:14.526013 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:41:14.526214 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 00:41:14.526230 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 24 00:41:14.526380 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 00:41:14.526638 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:41:13 UTC (1769215273) Jan 24 00:41:14.526803 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:41:14.526821 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:41:14.526834 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:41:14.526846 kernel: Segment Routing with IPv6 Jan 24 00:41:14.526861 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:41:14.526873 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:41:14.526884 kernel: Key type dns_resolver registered Jan 24 00:41:14.526894 kernel: IPI shorthand broadcast: enabled Jan 24 00:41:14.526905 kernel: sched_clock: Marking stable (2300078290, 685383030)->(3417470953, -432009633) Jan 24 00:41:14.526917 kernel: registered taskstats version 1 Jan 24 00:41:14.526927 kernel: Loading compiled-in X.509 certificates Jan 24 00:41:14.526938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:41:14.526948 kernel: Key type .fscrypt registered Jan 24 00:41:14.526962 kernel: Key type fscrypt-provisioning registered Jan 24 00:41:14.526973 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 00:41:14.526983 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:41:14.526993 kernel: ima: No architecture policies found Jan 24 00:41:14.527003 kernel: clk: Disabling unused clocks Jan 24 00:41:14.527013 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:41:14.527024 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:41:14.527034 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:41:14.527044 kernel: Run /init as init process Jan 24 00:41:14.527058 kernel: with arguments: Jan 24 00:41:14.527068 kernel: /init Jan 24 00:41:14.527079 kernel: with environment: Jan 24 00:41:14.527089 kernel: HOME=/ Jan 24 00:41:14.527099 kernel: TERM=linux Jan 24 00:41:14.527184 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:41:14.527201 systemd[1]: Detected virtualization kvm. Jan 24 00:41:14.527212 systemd[1]: Detected architecture x86-64. Jan 24 00:41:14.527227 systemd[1]: Running in initrd. Jan 24 00:41:14.527239 systemd[1]: No hostname configured, using default hostname. Jan 24 00:41:14.527250 systemd[1]: Hostname set to . Jan 24 00:41:14.527262 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:41:14.527274 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:41:14.527288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:41:14.527299 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:41:14.527312 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:41:14.527328 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:41:14.527339 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:41:14.527351 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:41:14.527366 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:41:14.527378 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:41:14.527390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:41:14.527405 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:41:14.527417 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:41:14.527429 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:41:14.527441 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:41:14.527567 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:41:14.527584 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:41:14.527596 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:41:14.527612 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:41:14.527627 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 24 00:41:14.527638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:41:14.527650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:41:14.527661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:41:14.527672 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:41:14.527683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:41:14.527695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:41:14.527709 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:41:14.527720 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:41:14.527732 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:41:14.527745 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:41:14.527757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:14.527800 systemd-journald[190]: Collecting audit messages is disabled. Jan 24 00:41:14.527833 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:41:14.527845 systemd-journald[190]: Journal started Jan 24 00:41:14.527868 systemd-journald[190]: Runtime Journal (/run/log/journal/4a46aca457d14808b39c50e4a229d528) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:41:14.535614 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:41:14.541899 systemd-modules-load[193]: Inserted module 'overlay' Jan 24 00:41:14.542204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:41:14.542804 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:41:14.562441 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:41:14.589257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:41:14.596543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:41:14.610664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:41:14.657672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:41:14.661400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:41:14.663361 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 24 00:41:14.677986 kernel: Bridge firewalling registered Jan 24 00:41:14.682085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:41:14.692289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:41:14.729716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:41:15.006325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:41:15.008950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:15.047022 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:41:15.056959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:41:15.089412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:41:15.091644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:41:15.119559 dracut-cmdline[231]: dracut-dracut-053 Jan 24 00:41:15.124823 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:41:15.195375 systemd-resolved[221]: Positive Trust Anchors: Jan 24 00:41:15.195429 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:41:15.195619 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:41:15.204213 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 24 00:41:15.205789 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:41:15.247977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:41:15.293021 kernel: SCSI subsystem initialized Jan 24 00:41:15.306736 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:41:15.325959 kernel: iscsi: registered transport (tcp) Jan 24 00:41:15.362284 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:41:15.363888 kernel: QLogic iSCSI HBA Driver Jan 24 00:41:15.451709 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:41:15.475662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:41:15.530883 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:41:15.531040 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:41:15.531058 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:41:15.608864 kernel: raid6: avx2x4 gen() 28078 MB/s Jan 24 00:41:15.629653 kernel: raid6: avx2x2 gen() 23312 MB/s Jan 24 00:41:15.652081 kernel: raid6: avx2x1 gen() 15145 MB/s Jan 24 00:41:15.652319 kernel: raid6: using algorithm avx2x4 gen() 28078 MB/s Jan 24 00:41:15.682991 kernel: raid6: .... xor() 3672 MB/s, rmw enabled Jan 24 00:41:15.683279 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:41:15.711819 kernel: xor: automatically using best checksumming function avx Jan 24 00:41:15.942838 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:41:15.967286 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:41:16.000878 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:41:16.037067 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 24 00:41:16.050020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:41:16.076990 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 24 00:41:16.097286 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Jan 24 00:41:16.152685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:41:16.196783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:41:16.311696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:41:16.343776 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:41:16.386567 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:41:16.409309 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:41:16.410334 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:41:16.427936 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:41:16.401661 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:41:16.460782 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:41:16.460817 kernel: GPT:9289727 != 19775487 Jan 24 00:41:16.460832 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:41:16.460846 kernel: GPT:9289727 != 19775487 Jan 24 00:41:16.460859 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:41:16.460872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:41:16.417565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:41:16.428069 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:41:16.486731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:41:16.503660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:41:16.504017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:41:16.512852 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:41:16.544288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:41:16.594011 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460) Jan 24 00:41:16.594090 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (467) Jan 24 00:41:16.547185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:16.550583 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:16.613728 kernel: libata version 3.00 loaded. Jan 24 00:41:16.616848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:16.652624 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:41:16.652699 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:41:16.653001 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:41:16.655122 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:41:16.661369 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 24 00:41:17.134843 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:41:17.135392 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:41:17.135703 kernel: AES CTR mode by8 optimization enabled Jan 24 00:41:17.135722 kernel: scsi host0: ahci Jan 24 00:41:17.135930 kernel: scsi host1: ahci Jan 24 00:41:17.136123 kernel: scsi host2: ahci Jan 24 00:41:17.138041 kernel: scsi host3: ahci Jan 24 00:41:17.138312 kernel: scsi host4: ahci Jan 24 00:41:17.138606 kernel: scsi host5: ahci Jan 24 00:41:17.138796 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 24 00:41:17.138813 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 24 00:41:17.138826 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 24 00:41:17.138840 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 24 00:41:17.138854 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 24 00:41:17.138873 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 24 00:41:17.138887 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:17.138901 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:17.138917 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:17.138932 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:41:17.138950 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:17.138967 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:41:17.138981 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:41:17.138998 kernel: ata3.00: applying bridge limits Jan 24 00:41:17.139016 kernel: ata3.00: configured for UDMA/100 Jan 24 00:41:17.139025 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:41:17.146937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:17.187429 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:41:17.188123 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:41:17.188079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:41:17.194995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:41:17.225945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:41:17.240814 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:41:17.287664 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:41:17.290031 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:41:17.292730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:41:17.326406 disk-uuid[567]: Primary Header is updated. Jan 24 00:41:17.326406 disk-uuid[567]: Secondary Entries is updated. Jan 24 00:41:17.326406 disk-uuid[567]: Secondary Header is updated. Jan 24 00:41:17.342532 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:41:17.357677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:41:17.359634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:41:18.406827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:41:18.409200 disk-uuid[570]: The operation has completed successfully. Jan 24 00:41:18.489083 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:41:18.490274 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:41:18.531073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:41:18.554840 sh[595]: Success Jan 24 00:41:18.638212 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:41:18.761648 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:41:18.775878 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:41:18.785086 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:41:18.840700 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:41:18.840824 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:18.851586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:41:18.851710 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:41:18.859994 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:41:18.905731 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:41:18.912058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:41:18.934816 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:41:18.943963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:41:18.976599 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:41:18.976643 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:18.985630 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:41:19.007796 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:41:19.029393 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:41:19.043363 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:41:19.081426 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:41:19.106606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:41:19.245252 ignition[709]: Ignition 2.19.0 Jan 24 00:41:19.245268 ignition[709]: Stage: fetch-offline Jan 24 00:41:19.245319 ignition[709]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:19.245335 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:19.254326 ignition[709]: parsed url from cmdline: "" Jan 24 00:41:19.254335 ignition[709]: no config URL provided Jan 24 00:41:19.254347 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:41:19.254374 ignition[709]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:41:19.254426 ignition[709]: op(1): [started] loading QEMU firmware config module Jan 24 00:41:19.254436 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:41:19.287642 ignition[709]: op(1): [finished] loading QEMU firmware config module Jan 24 00:41:19.287698 ignition[709]: QEMU firmware config was not found. Ignoring... 
Jan 24 00:41:19.332534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:41:19.356933 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:41:19.415117 systemd-networkd[783]: lo: Link UP Jan 24 00:41:19.415217 systemd-networkd[783]: lo: Gained carrier Jan 24 00:41:19.418033 systemd-networkd[783]: Enumeration completed Jan 24 00:41:19.419606 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:41:19.420586 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:41:19.420593 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:41:19.424331 systemd-networkd[783]: eth0: Link UP Jan 24 00:41:19.424337 systemd-networkd[783]: eth0: Gained carrier Jan 24 00:41:19.424348 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:41:19.430852 systemd[1]: Reached target network.target - Network. Jan 24 00:41:19.489889 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:41:19.839354 ignition[709]: parsing config with SHA512: 7d7564558720b6c865a1feab1380a9fd0ee114f3419ecf1958cc9a5645523ae278b18d8180e78a18ee19478d141bcb37c46def7ad3a085f611bb09f7f06d6856 Jan 24 00:41:19.844816 unknown[709]: fetched base config from "system" Jan 24 00:41:19.844829 unknown[709]: fetched user config from "qemu" Jan 24 00:41:19.845685 ignition[709]: fetch-offline: fetch-offline passed Jan 24 00:41:19.847957 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:41:19.845776 ignition[709]: Ignition finished successfully Jan 24 00:41:19.857009 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:41:19.883848 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:41:19.908142 ignition[787]: Ignition 2.19.0 Jan 24 00:41:19.908221 ignition[787]: Stage: kargs Jan 24 00:41:19.908383 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:19.914921 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:41:19.908395 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:19.909406 ignition[787]: kargs: kargs passed Jan 24 00:41:19.909580 ignition[787]: Ignition finished successfully Jan 24 00:41:19.938848 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:41:19.986675 ignition[796]: Ignition 2.19.0 Jan 24 00:41:19.986735 ignition[796]: Stage: disks Jan 24 00:41:19.991906 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:41:19.986959 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:19.998369 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:41:19.986974 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:20.006806 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:41:19.988274 ignition[796]: disks: disks passed Jan 24 00:41:20.019663 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 24 00:41:19.988347 ignition[796]: Ignition finished successfully Jan 24 00:41:20.025985 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:41:20.032258 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:41:20.064088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:41:20.120804 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:41:20.104774 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:41:20.137951 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:41:20.310569 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:41:20.312012 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:41:20.325424 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:41:20.351799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:41:20.369821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:41:20.399086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 24 00:41:20.399113 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:41:20.399124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:20.399134 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:41:20.399143 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:41:20.399744 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:41:20.399872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:41:20.423691 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:41:20.440758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:41:20.454737 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:41:20.484085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:41:20.549239 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:41:20.568326 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:41:20.583625 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:41:20.595345 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:41:20.765638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:41:20.789921 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:41:20.802771 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:41:20.808251 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 24 00:41:20.823756 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:41:20.861227 ignition[928]: INFO : Ignition 2.19.0 Jan 24 00:41:20.866326 ignition[928]: INFO : Stage: mount Jan 24 00:41:20.866326 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:20.866326 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:20.866326 ignition[928]: INFO : mount: mount passed Jan 24 00:41:20.866326 ignition[928]: INFO : Ignition finished successfully Jan 24 00:41:20.866818 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:41:20.874008 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:41:20.905900 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:41:20.920823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:41:20.950610 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Jan 24 00:41:20.965947 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:41:20.966004 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:41:20.966016 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:41:20.983649 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:41:20.986367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:41:21.034908 ignition[960]: INFO : Ignition 2.19.0 Jan 24 00:41:21.034908 ignition[960]: INFO : Stage: files Jan 24 00:41:21.044898 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:21.044898 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:21.060913 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:41:21.069372 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:41:21.069372 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:41:21.096739 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:41:21.105293 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:41:21.114250 unknown[960]: wrote ssh authorized keys file for user: core Jan 24 00:41:21.121536 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:41:21.135648 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:41:21.147900 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:41:21.186889 systemd-networkd[783]: eth0: Gained IPv6LL Jan 24 00:41:21.215218 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:41:21.324237 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:41:21.324237 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:41:21.349973 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:41:21.714331 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:41:22.339018 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:41:22.339018 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:41:22.360617 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:41:22.441594 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:41:22.441594 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:41:22.441594 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:41:22.441594 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:41:22.441594 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:41:22.441594 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:41:22.441594 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:41:22.441594 ignition[960]: INFO : files: files passed Jan 24 00:41:22.441594 ignition[960]: INFO : Ignition finished successfully Jan 24 00:41:22.395037 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:41:22.429885 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:41:22.442641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:41:22.558169 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:41:22.455557 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:41:22.571765 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:41:22.571765 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:41:22.455792 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:41:22.609019 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:41:22.476056 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:41:22.483723 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:41:22.533866 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:41:22.581386 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:41:22.581654 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:41:22.596355 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:41:22.609017 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:41:22.614307 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:41:22.641905 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:41:22.666431 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:41:22.698084 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:41:22.721698 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:41:22.733767 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:41:22.746049 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:41:22.755820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:41:22.760815 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:41:22.774155 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:41:22.784841 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:41:22.794423 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:41:22.806335 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:41:22.819636 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:41:22.831083 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:41:22.841919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:41:22.854819 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:41:22.866314 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:41:22.877639 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:41:22.886120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:41:22.891054 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:41:22.902760 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:41:22.913869 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:41:22.925748 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:41:22.930648 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:41:22.943884 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:41:22.948746 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:41:22.960890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:41:22.966828 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:41:22.978923 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:41:22.988258 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:41:22.989846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:41:23.006263 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:41:23.011069 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:41:23.020261 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:41:23.020421 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:41:23.029942 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:41:23.030137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:41:23.039409 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:41:23.039860 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:41:23.050571 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:41:23.050779 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:41:23.078812 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 24 00:41:23.102366 ignition[1014]: INFO : Ignition 2.19.0 Jan 24 00:41:23.102366 ignition[1014]: INFO : Stage: umount Jan 24 00:41:23.102366 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:41:23.102366 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:41:23.089650 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:41:23.171986 ignition[1014]: INFO : umount: umount passed Jan 24 00:41:23.171986 ignition[1014]: INFO : Ignition finished successfully Jan 24 00:41:23.097140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:41:23.097646 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:41:23.108712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:41:23.108942 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:41:23.121756 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:41:23.121962 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:41:23.129265 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:41:23.130380 systemd[1]: Stopped target network.target - Network. Jan 24 00:41:23.138334 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:41:23.138429 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:41:23.149974 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:41:23.150052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:41:23.155536 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:41:23.155605 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:41:23.166545 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:41:23.166635 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:41:23.172130 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:41:23.181358 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:41:23.192594 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 24 00:41:23.193228 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:41:23.193359 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:41:23.205401 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:41:23.205695 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:41:23.215300 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:41:23.215576 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:41:23.226898 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:41:23.227103 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:41:23.241284 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:41:23.241333 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:41:23.246882 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:41:23.246938 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:41:23.277757 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:41:23.282389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 24 00:41:23.282540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:41:23.288840 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:41:23.288899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:41:23.294073 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:41:23.294149 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:41:23.304355 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:41:23.304405 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:41:23.310947 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:41:23.577954 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Jan 24 00:41:23.338142 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:41:23.338538 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:41:23.348070 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:41:23.348384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:41:23.361537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:41:23.361626 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:41:23.371338 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:41:23.371407 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:41:23.376782 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:41:23.376855 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:41:23.382321 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:41:23.382404 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:41:23.391658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:41:23.391735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:41:23.419909 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:41:23.429436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:41:23.429728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:41:23.442012 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:41:23.442093 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:41:23.453055 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:41:23.453128 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:41:23.459296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:41:23.459348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:23.470999 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:41:23.471261 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:41:23.481852 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:41:23.508800 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 24 00:41:23.525971 systemd[1]: Switching root. Jan 24 00:41:23.711057 systemd-journald[190]: Journal stopped Jan 24 00:41:25.517096 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:41:25.517193 kernel: SELinux: policy capability open_perms=1 Jan 24 00:41:25.517282 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:41:25.517302 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:41:25.517327 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:41:25.517344 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:41:25.517366 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:41:25.517383 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:41:25.517399 kernel: audit: type=1403 audit(1769215283.806:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:41:25.517423 systemd[1]: Successfully loaded SELinux policy in 72.076ms. Jan 24 00:41:25.517588 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.872ms. Jan 24 00:41:25.517612 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:41:25.517632 systemd[1]: Detected virtualization kvm. Jan 24 00:41:25.517650 systemd[1]: Detected architecture x86-64. Jan 24 00:41:25.517673 systemd[1]: Detected first boot. Jan 24 00:41:25.517691 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:41:25.517709 zram_generator::config[1059]: No configuration found. Jan 24 00:41:25.517727 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:41:25.517745 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:41:25.517764 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:41:25.517782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:41:25.517808 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:41:25.517831 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:41:25.517852 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:41:25.517871 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:41:25.517888 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:41:25.517906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:41:25.517924 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:41:25.517942 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:41:25.517960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:41:25.517978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:41:25.518001 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:41:25.518020 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 24 00:41:25.518038 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:41:25.518056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:41:25.518075 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:41:25.518093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:41:25.518111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:41:25.518130 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:41:25.518153 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:41:25.518170 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:41:25.518186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:41:25.518283 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:41:25.518305 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:41:25.518326 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:41:25.518344 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:41:25.518362 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:41:25.518385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:41:25.518404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:41:25.518423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:41:25.518441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:41:25.521397 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:41:25.521411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:41:25.521423 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:41:25.521434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:25.521521 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:41:25.521540 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:41:25.521551 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:41:25.521562 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:41:25.521573 systemd[1]: Reached target machines.target - Containers. Jan 24 00:41:25.521584 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:41:25.521595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:41:25.521606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:41:25.521616 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:41:25.521627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:41:25.521641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:41:25.521652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 24 00:41:25.521664 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:41:25.521675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:41:25.521686 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:41:25.521696 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:41:25.521707 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:41:25.521717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:41:25.521731 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:41:25.521742 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:41:25.521753 kernel: fuse: init (API version 7.39) Jan 24 00:41:25.521763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:41:25.521774 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:41:25.521785 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:41:25.521824 systemd-journald[1126]: Collecting audit messages is disabled. Jan 24 00:41:25.521849 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:41:25.521860 systemd-journald[1126]: Journal started Jan 24 00:41:25.521882 systemd-journald[1126]: Runtime Journal (/run/log/journal/4a46aca457d14808b39c50e4a229d528) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:41:24.603837 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:41:24.625862 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:41:24.626946 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:41:24.627591 systemd[1]: systemd-journald.service: Consumed 2.741s CPU time. Jan 24 00:41:25.530356 kernel: loop: module loaded Jan 24 00:41:25.541829 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:41:25.541893 systemd[1]: Stopped verity-setup.service. Jan 24 00:41:25.560583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:25.568000 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:41:25.573285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:41:25.579434 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:41:25.586682 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:41:25.591381 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:41:25.597094 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:41:25.602969 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:41:25.608985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:41:25.615989 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:41:25.616344 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:41:25.622396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:41:25.624639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:41:25.632174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 24 00:41:25.632548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:41:25.643045 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:41:25.643419 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:41:25.649359 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:41:25.649642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:41:25.656569 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:41:25.662948 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:41:25.669154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:41:25.675843 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:41:25.682280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:41:25.702121 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:41:25.740637 kernel: ACPI: bus type drm_connector registered Jan 24 00:41:25.746184 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:41:25.756191 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:41:25.761615 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:41:25.761661 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:41:25.767980 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:41:25.777187 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:41:25.784640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:41:25.788983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:41:25.791741 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:41:25.799137 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:41:25.804692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:41:25.807431 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:41:25.813024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:41:25.815546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:41:25.818904 systemd-journald[1126]: Time spent on flushing to /var/log/journal/4a46aca457d14808b39c50e4a229d528 is 39.653ms for 941 entries. Jan 24 00:41:25.818904 systemd-journald[1126]: System Journal (/var/log/journal/4a46aca457d14808b39c50e4a229d528) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:41:25.890854 systemd-journald[1126]: Received client request to flush runtime journal. Jan 24 00:41:25.890933 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 00:41:25.829418 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 24 00:41:25.839639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:41:25.851817 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:41:25.862783 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:41:25.865361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:41:25.882348 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:41:25.890346 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:41:25.900986 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:41:25.915823 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:41:25.925952 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:41:25.936856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:41:25.949627 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:41:25.956293 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 24 00:41:25.956318 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 24 00:41:25.962870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:41:25.979071 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:41:25.986415 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:41:25.996000 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 00:41:26.004351 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:41:26.016181 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:41:26.027307 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:41:26.028356 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:41:26.079166 kernel: loop2: detected capacity change from 0 to 224512 Jan 24 00:41:26.068319 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:41:26.089062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:41:26.126554 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jan 24 00:41:26.127079 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jan 24 00:41:26.134597 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:41:26.138317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:41:26.164546 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:41:26.187940 kernel: loop5: detected capacity change from 0 to 224512 Jan 24 00:41:26.208329 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 24 00:41:26.209037 (sd-merge)[1202]: Merged extensions into '/usr'. Jan 24 00:41:26.216114 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:41:26.216134 systemd[1]: Reloading... Jan 24 00:41:26.313508 zram_generator::config[1228]: No configuration found. 
Jan 24 00:41:26.392175 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:41:26.465974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:41:26.541098 systemd[1]: Reloading finished in 324 ms. Jan 24 00:41:26.584978 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:41:26.592784 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:41:26.618045 systemd[1]: Starting ensure-sysext.service... Jan 24 00:41:26.625167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:41:26.635885 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:41:26.635953 systemd[1]: Reloading... Jan 24 00:41:26.664113 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:41:26.664847 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:41:26.666718 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:41:26.667165 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 24 00:41:26.667767 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 24 00:41:26.673913 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:41:26.673975 systemd-tmpfiles[1267]: Skipping /boot Jan 24 00:41:26.695007 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:41:26.695028 systemd-tmpfiles[1267]: Skipping /boot Jan 24 00:41:26.718636 zram_generator::config[1294]: No configuration found. Jan 24 00:41:26.864363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:41:26.932182 systemd[1]: Reloading finished in 295 ms. Jan 24 00:41:26.959075 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:41:26.978440 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:41:27.011797 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:41:27.022069 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:41:27.032423 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:41:27.043298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:41:27.051745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:41:27.062766 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:41:27.076994 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:41:27.086066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 24 00:41:27.086407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:41:27.095609 augenrules[1354]: No rules Jan 24 00:41:27.089657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:41:27.094825 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Jan 24 00:41:27.100887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:41:27.112794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:41:27.119741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:41:27.119860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:27.121959 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:41:27.130607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:41:27.130799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:41:27.139366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:41:27.139675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:41:27.148840 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:41:27.157346 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:41:27.170930 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:41:27.178578 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:41:27.183772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:41:27.211965 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:41:27.230030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1381) Jan 24 00:41:27.251635 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:41:27.253587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:27.253968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:41:27.267991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:41:27.285860 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:41:27.309977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:41:27.320413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:41:27.327106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:41:27.332056 systemd-resolved[1348]: Positive Trust Anchors: Jan 24 00:41:27.332060 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:41:27.333584 systemd-resolved[1348]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:41:27.333635 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:41:27.340778 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:41:27.343899 systemd-resolved[1348]: Defaulting to hostname 'linux'. Jan 24 00:41:27.345315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:41:27.347045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:41:27.355538 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:41:27.363536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:41:27.364017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:41:27.371004 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:41:27.371391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:41:27.379044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:41:27.379438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:41:27.387921 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:41:27.388353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:41:27.396061 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:41:27.404211 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:41:27.414322 systemd[1]: Finished ensure-sysext.service. Jan 24 00:41:27.416580 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:41:27.446075 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:41:27.446665 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:41:27.446958 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:41:27.451926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:41:27.464873 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:41:27.482662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:41:27.488211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:41:27.488350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:41:27.492979 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 24 00:41:27.499047 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:41:27.517530 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 00:41:27.525019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:41:27.536877 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:41:27.602651 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:41:27.602625 systemd-networkd[1405]: lo: Link UP Jan 24 00:41:27.602630 systemd-networkd[1405]: lo: Gained carrier Jan 24 00:41:27.604743 systemd-networkd[1405]: Enumeration completed Jan 24 00:41:27.604944 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:41:27.605175 systemd[1]: Reached target network.target - Network. Jan 24 00:41:27.606321 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:41:27.606363 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:41:27.607923 systemd-networkd[1405]: eth0: Link UP Jan 24 00:41:27.607976 systemd-networkd[1405]: eth0: Gained carrier Jan 24 00:41:27.607990 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:41:27.624973 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:41:27.625386 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:41:27.628721 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:41:27.658085 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:41:27.665720 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Jan 24 00:41:28.160595 systemd-resolved[1348]: Clock change detected. Flushing caches. Jan 24 00:41:28.161477 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:41:28.161788 systemd-timesyncd[1416]: Initial clock synchronization to Sat 2026-01-24 00:41:28.160127 UTC. Jan 24 00:41:28.302083 kernel: kvm_amd: TSC scaling supported Jan 24 00:41:28.302258 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:41:28.302395 kernel: kvm_amd: Nested Paging enabled Jan 24 00:41:28.302472 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:41:28.302547 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:41:28.410115 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:41:28.440994 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:41:28.542389 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:41:28.552278 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:41:28.565560 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:41:28.601315 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:41:28.611859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 24 00:41:28.620169 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:41:28.628437 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:41:28.637320 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:41:28.645834 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:41:28.653325 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:41:28.661846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:41:28.670546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:41:28.670587 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:41:28.676669 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:41:28.687108 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:41:28.696505 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:41:28.715509 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:41:28.725417 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:41:28.733470 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:41:28.740192 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:41:28.741396 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:41:28.745178 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:41:28.749459 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:41:28.749536 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:41:28.751630 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:41:28.759111 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:41:28.766615 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:41:28.776323 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:41:28.780301 jq[1438]: false Jan 24 00:41:28.781535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:41:28.800412 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:41:28.810208 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:41:28.818105 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 24 00:41:28.823800 extend-filesystems[1439]: Found loop3 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found loop4 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found loop5 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found sr0 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda1 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda2 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda3 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found usr Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda4 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda6 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda7 Jan 24 00:41:28.828148 extend-filesystems[1439]: Found vda9 Jan 24 00:41:28.828148 extend-filesystems[1439]: Checking size of /dev/vda9 Jan 24 00:41:28.984747 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 24 00:41:28.984810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1382) Jan 24 00:41:28.984838 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:41:28.828129 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:41:28.825444 dbus-daemon[1437]: [system] SELinux support is enabled Jan 24 00:41:28.985483 extend-filesystems[1439]: Resized partition /dev/vda9 Jan 24 00:41:28.833083 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:41:28.994165 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:41:28.994165 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:41:28.994165 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:41:28.994165 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:41:28.843117 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:41:29.021105 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Jan 24 00:41:28.844056 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:41:29.026493 update_engine[1451]: I20260124 00:41:28.944490 1451 main.cc:92] Flatcar Update Engine starting Jan 24 00:41:29.026493 update_engine[1451]: I20260124 00:41:28.950439 1451 update_check_scheduler.cc:74] Next update check in 10m24s Jan 24 00:41:28.847191 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:41:29.027299 jq[1457]: true Jan 24 00:41:28.899322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:41:28.908316 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:41:28.919216 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:41:28.953652 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:41:29.028131 tar[1464]: linux-amd64/LICENSE Jan 24 00:41:29.028131 tar[1464]: linux-amd64/helm Jan 24 00:41:28.954090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:41:28.954567 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:41:29.029121 jq[1466]: true Jan 24 00:41:28.954853 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 24 00:41:28.963444 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:41:28.963815 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:41:28.974316 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:41:28.974622 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:41:29.009495 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:41:29.042147 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:41:29.042179 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:41:29.046346 systemd-logind[1447]: New seat seat0. Jan 24 00:41:29.053761 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:41:29.070370 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:41:29.077467 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:41:29.077625 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:41:29.087753 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:41:29.087882 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:41:29.107125 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:41:29.153745 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:41:29.155210 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:41:29.163219 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:41:29.187021 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:41:29.221458 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:41:29.265567 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:41:29.286453 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:41:29.287344 containerd[1467]: time="2026-01-24T00:41:29.287142896Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:41:29.309838 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:41:29.310346 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:41:29.326003 containerd[1467]: time="2026-01-24T00:41:29.324057047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.326751 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332108776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332146186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332164831Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332335169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332349496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332411652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332423193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332599071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332613268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332624810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334021 containerd[1467]: time="2026-01-24T00:41:29.332633415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334360 containerd[1467]: time="2026-01-24T00:41:29.332792743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334360 containerd[1467]: time="2026-01-24T00:41:29.333416337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334360 containerd[1467]: time="2026-01-24T00:41:29.333538675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:41:29.334360 containerd[1467]: time="2026-01-24T00:41:29.333551930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:41:29.334360 containerd[1467]: time="2026-01-24T00:41:29.333641547Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 24 00:41:29.334360 containerd[1467]: time="2026-01-24T00:41:29.333756341Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:41:29.347208 containerd[1467]: time="2026-01-24T00:41:29.347118319Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:41:29.347295 containerd[1467]: time="2026-01-24T00:41:29.347254403Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:41:29.347295 containerd[1467]: time="2026-01-24T00:41:29.347284630Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:41:29.347418 containerd[1467]: time="2026-01-24T00:41:29.347309476Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:41:29.347418 containerd[1467]: time="2026-01-24T00:41:29.347331787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:41:29.347730 containerd[1467]: time="2026-01-24T00:41:29.347538303Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:41:29.350428 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:41:29.350582 containerd[1467]: time="2026-01-24T00:41:29.350546530Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358103246Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358157667Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358179328Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358200267Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358219572Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358236564Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358256732Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358278913Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358296346Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358314981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358332033Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358359223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358379972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360264 containerd[1467]: time="2026-01-24T00:41:29.358398517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358416831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358434534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358454311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358470171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358489367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358509664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358529321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358546723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358561942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358578272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358609631Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358637703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358656038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.360756 containerd[1467]: time="2026-01-24T00:41:29.358671226Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.358794135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.358826255Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.358845511Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.358865578Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.358879394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.358994720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.359016931Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:41:29.361303 containerd[1467]: time="2026-01-24T00:41:29.359036147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:41:29.361512 containerd[1467]: time="2026-01-24T00:41:29.359369299Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:41:29.361512 containerd[1467]: time="2026-01-24T00:41:29.359452093Z" level=info msg="Connect containerd service" Jan 24 00:41:29.361512 containerd[1467]: time="2026-01-24T00:41:29.359558853Z" level=info msg="using legacy CRI server" Jan 24 00:41:29.361512 containerd[1467]: time="2026-01-24T00:41:29.359572077Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:41:29.361512 containerd[1467]: time="2026-01-24T00:41:29.361009007Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:41:29.362003 containerd[1467]: time="2026-01-24T00:41:29.361879852Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:41:29.362569 containerd[1467]: time="2026-01-24T00:41:29.362462112Z" level=info msg="Start subscribing containerd event" Jan 24 00:41:29.363111 containerd[1467]: time="2026-01-24T00:41:29.363028128Z" level=info msg="Start recovering state" Jan 24 00:41:29.363157 containerd[1467]: time="2026-01-24T00:41:29.362608812Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:41:29.363357 containerd[1467]: time="2026-01-24T00:41:29.363184067Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:41:29.364028 containerd[1467]: time="2026-01-24T00:41:29.364006114Z" level=info msg="Start event monitor" Jan 24 00:41:29.364416 containerd[1467]: time="2026-01-24T00:41:29.364134884Z" level=info msg="Start snapshots syncer" Jan 24 00:41:29.364416 containerd[1467]: time="2026-01-24T00:41:29.364158548Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:41:29.364416 containerd[1467]: time="2026-01-24T00:41:29.364169759Z" level=info msg="Start streaming server" Jan 24 00:41:29.364416 containerd[1467]: time="2026-01-24T00:41:29.364251331Z" level=info msg="containerd successfully booted in 0.078904s" Jan 24 00:41:29.367593 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:41:29.376107 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:41:29.382112 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:41:29.388212 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:41:29.598271 tar[1464]: linux-amd64/README.md Jan 24 00:41:29.619256 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:41:30.060161 systemd-networkd[1405]: eth0: Gained IPv6LL Jan 24 00:41:30.064790 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:41:30.074082 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:41:30.095422 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:41:30.105768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
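Two things stand out in the containerd startup above: every snapshotter whose prerequisites are missing is probed and skipped (no aufs module, /var/lib/containerd sits on ext4 rather than btrfs or zfs, devmapper unconfigured), leaving overlayfs in use, and the CRI plugin comes up with SystemdCgroup:true for runc but finds no CNI configuration, hence the "no network config found in /etc/cni/net.d" error until a network add-on installs one. A sketch for confirming the snapshotter situation on a running host, plus a purely illustrative minimal conflist (the bridge/host-local plugins, the file name, and the 10.244.0.0/16 subnet are assumptions, not taken from this host; real clusters normally get this file from their network add-on):

    # Backing filesystem of containerd's state directory (ext4 here, per the log).
    stat -f -c %T /var/lib/containerd

    # Plugin load status (ok / skip), matching the "skip loading plugin" entries above.
    ctr --address /run/containerd/containerd.sock plugins ls

    # Illustrative only: the smallest conflist the CRI CNI loader would accept,
    # assuming the standard bridge and host-local binaries exist in /opt/cni/bin.
    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } }
      ]
    }
    EOF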
Jan 24 00:41:30.116106 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:41:30.185143 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:41:30.200169 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:41:30.201047 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:41:30.210839 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:41:30.962339 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:41:30.992371 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:60466.service - OpenSSH per-connection server daemon (10.0.0.1:60466). Jan 24 00:41:31.250526 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 60466 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:31.268844 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:31.299096 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:41:31.466779 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:41:31.477021 systemd-logind[1447]: New session 1 of user core. Jan 24 00:41:31.530396 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:41:31.553457 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:41:31.628571 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:41:31.915606 kernel: hrtimer: interrupt took 4856034 ns Jan 24 00:41:32.193043 systemd[1551]: Queued start job for default target default.target. Jan 24 00:41:32.215583 systemd[1551]: Created slice app.slice - User Application Slice. Jan 24 00:41:32.215685 systemd[1551]: Reached target paths.target - Paths. Jan 24 00:41:32.215777 systemd[1551]: Reached target timers.target - Timers. Jan 24 00:41:32.222620 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:41:32.329809 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:41:32.331873 systemd[1551]: Reached target sockets.target - Sockets. Jan 24 00:41:32.332058 systemd[1551]: Reached target basic.target - Basic System. Jan 24 00:41:32.332134 systemd[1551]: Reached target default.target - Main User Target. Jan 24 00:41:32.332245 systemd[1551]: Startup finished in 669ms. Jan 24 00:41:32.332471 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:41:32.364032 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:41:32.476866 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:60470.service - OpenSSH per-connection server daemon (10.0.0.1:60470). Jan 24 00:41:32.585249 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 60470 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:32.586395 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:32.595422 systemd-logind[1447]: New session 2 of user core. Jan 24 00:41:32.601213 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:41:32.798036 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:32.830824 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:60470.service: Deactivated successfully. Jan 24 00:41:32.837561 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 24 00:41:32.840259 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:41:32.883107 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:60474.service - OpenSSH per-connection server daemon (10.0.0.1:60474). Jan 24 00:41:32.894573 systemd-logind[1447]: Removed session 2. Jan 24 00:41:32.930631 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 60474 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:32.932449 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:32.940593 systemd-logind[1447]: New session 3 of user core. Jan 24 00:41:32.950619 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:41:33.037241 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:33.048318 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:60474.service: Deactivated successfully. Jan 24 00:41:33.060605 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:41:33.065162 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:41:33.068491 systemd-logind[1447]: Removed session 3. Jan 24 00:41:34.833228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:41:34.834116 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:41:34.836196 systemd[1]: Startup finished in 2.586s (kernel) + 9.881s (initrd) + 10.612s (userspace) = 23.081s. Jan 24 00:41:34.861999 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:41:37.108013 kubelet[1584]: E0124 00:41:37.107628 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:41:37.113729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:41:37.114189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:41:37.114879 systemd[1]: kubelet.service: Consumed 6.332s CPU time. Jan 24 00:41:43.055336 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:45918.service - OpenSSH per-connection server daemon (10.0.0.1:45918). Jan 24 00:41:43.106038 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 45918 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:43.109159 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:43.118181 systemd-logind[1447]: New session 4 of user core. Jan 24 00:41:43.128205 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:41:43.194757 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:43.206230 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:45918.service: Deactivated successfully. Jan 24 00:41:43.209396 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:41:43.211662 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:41:43.223609 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928). Jan 24 00:41:43.226437 systemd-logind[1447]: Removed session 4. 
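The kubelet start above fails immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is generated by kubeadm init or kubeadm join, so this crash-and-restart cycle (it repeats below with a growing restart counter) is expected until one of those commands runs. Some read-only checks for this state, assuming the standard kubeadm drop-in path:

    # Present only after kubeadm init/join has written the kubelet config.
    ls -l /var/lib/kubelet/config.yaml

    # The kubeadm drop-in that passes --config=/var/lib/kubelet/config.yaml.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    # Latest start attempt and its exit status.
    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager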
Jan 24 00:41:43.268513 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:43.271335 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:43.280136 systemd-logind[1447]: New session 5 of user core. Jan 24 00:41:43.294227 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:41:43.354683 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:43.371379 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:45928.service: Deactivated successfully. Jan 24 00:41:43.373745 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:41:43.376398 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:41:43.386481 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:45942.service - OpenSSH per-connection server daemon (10.0.0.1:45942). Jan 24 00:41:43.388404 systemd-logind[1447]: Removed session 5. Jan 24 00:41:43.430304 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 45942 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:43.433010 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:43.440884 systemd-logind[1447]: New session 6 of user core. Jan 24 00:41:43.459214 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:41:43.528577 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:43.543668 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:45942.service: Deactivated successfully. Jan 24 00:41:43.545870 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:41:43.550687 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:41:43.563710 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:45944.service - OpenSSH per-connection server daemon (10.0.0.1:45944). Jan 24 00:41:43.565534 systemd-logind[1447]: Removed session 6. Jan 24 00:41:43.609338 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 45944 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:43.611450 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:43.618716 systemd-logind[1447]: New session 7 of user core. Jan 24 00:41:43.628168 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:41:43.704030 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:41:43.704412 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:41:43.732183 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 24 00:41:43.735194 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:43.749186 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:45944.service: Deactivated successfully. Jan 24 00:41:43.751496 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:41:43.754576 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:41:43.765607 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:45952.service - OpenSSH per-connection server daemon (10.0.0.1:45952). Jan 24 00:41:43.767550 systemd-logind[1447]: Removed session 7. 
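The sudo call above switches SELinux to enforcing mode for the current boot with setenforce 1. A quick confirmation of the resulting mode, assuming the SELinux userspace tools are installed:

    # Runtime SELinux mode after the setenforce call (Enforcing or Permissive).
    getenforce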
Jan 24 00:41:43.803396 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 45952 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:43.805450 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:43.812432 systemd-logind[1447]: New session 8 of user core. Jan 24 00:41:43.830293 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:41:43.896759 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:41:43.897521 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:41:43.905476 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 24 00:41:43.914574 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:41:43.915132 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:41:43.945399 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:41:43.949766 auditctl[1630]: No rules Jan 24 00:41:43.950523 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:41:43.951153 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:41:43.955570 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:41:44.033630 augenrules[1648]: No rules Jan 24 00:41:44.035752 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:41:44.038355 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 24 00:41:44.043197 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 24 00:41:44.058587 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:45952.service: Deactivated successfully. Jan 24 00:41:44.061310 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:41:44.064120 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:41:44.073641 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:45968.service - OpenSSH per-connection server daemon (10.0.0.1:45968). Jan 24 00:41:44.076067 systemd-logind[1447]: Removed session 8. Jan 24 00:41:44.130136 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 45968 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:41:44.134503 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:41:44.147790 systemd-logind[1447]: New session 9 of user core. Jan 24 00:41:44.154391 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:41:44.226706 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:41:44.227508 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:41:44.709601 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:41:44.709993 (dockerd)[1678]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:41:45.199350 dockerd[1678]: time="2026-01-24T00:41:45.199173747Z" level=info msg="Starting up" Jan 24 00:41:45.387019 dockerd[1678]: time="2026-01-24T00:41:45.386283040Z" level=info msg="Loading containers: start." 
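The install session above clears the default audit rules: the packaged rule fragments are removed from /etc/audit/rules.d, audit-rules is restarted, and both auditctl and augenrules then report "No rules". For reference, the equivalent manual steps (paths as in the log):

    # Kernel audit rules currently loaded (prints "No rules" when empty).
    auditctl -l

    # Rebuild the combined rule file from /etc/audit/rules.d and load it.
    augenrules --load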
Jan 24 00:41:45.702025 kernel: Initializing XFRM netlink socket Jan 24 00:41:45.944777 systemd-networkd[1405]: docker0: Link UP Jan 24 00:41:46.001598 dockerd[1678]: time="2026-01-24T00:41:46.001393424Z" level=info msg="Loading containers: done." Jan 24 00:41:46.030393 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2864721489-merged.mount: Deactivated successfully. Jan 24 00:41:46.037438 dockerd[1678]: time="2026-01-24T00:41:46.037298221Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:41:46.037571 dockerd[1678]: time="2026-01-24T00:41:46.037550581Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:41:46.038077 dockerd[1678]: time="2026-01-24T00:41:46.037733473Z" level=info msg="Daemon has completed initialization" Jan 24 00:41:46.131727 dockerd[1678]: time="2026-01-24T00:41:46.131600901Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:41:46.134111 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:41:47.083494 containerd[1467]: time="2026-01-24T00:41:47.083314516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:41:47.304781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:41:47.314477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:41:47.768168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296198668.mount: Deactivated successfully. Jan 24 00:41:48.248387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:41:48.249439 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:41:48.364351 kubelet[1848]: E0124 00:41:48.364223 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:41:48.373224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:41:48.373487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:41:48.374156 systemd[1]: kubelet.service: Consumed 1.036s CPU time. 
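dockerd above settles on the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that only slows image builds, it does not affect running containers. A quick way to confirm the driver the daemon chose:

    # Storage driver selected by the running daemon (overlay2 here).
    docker info --format '{{.Driver}}'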
Jan 24 00:41:53.914171 containerd[1467]: time="2026-01-24T00:41:53.913883306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:41:53.915568 containerd[1467]: time="2026-01-24T00:41:53.915420696Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 24 00:41:53.918076 containerd[1467]: time="2026-01-24T00:41:53.918014657Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:41:53.922586 containerd[1467]: time="2026-01-24T00:41:53.922430660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:41:53.924238 containerd[1467]: time="2026-01-24T00:41:53.924137377Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 6.840718315s" Jan 24 00:41:53.924492 containerd[1467]: time="2026-01-24T00:41:53.924230200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:41:53.926326 containerd[1467]: time="2026-01-24T00:41:53.926254358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:41:56.636858 containerd[1467]: time="2026-01-24T00:41:56.636591260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:41:56.638705 containerd[1467]: time="2026-01-24T00:41:56.638370962Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 24 00:41:56.640103 containerd[1467]: time="2026-01-24T00:41:56.640058134Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:41:56.645542 containerd[1467]: time="2026-01-24T00:41:56.645453171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:41:56.646633 containerd[1467]: time="2026-01-24T00:41:56.646560440Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.72021882s" Jan 24 00:41:56.646684 containerd[1467]: time="2026-01-24T00:41:56.646644207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:41:56.648347 containerd[1467]: time="2026-01-24T00:41:56.648260774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:41:58.668792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:41:58.686550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:00.443411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:00.708868 (kubelet)[1916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:01.088338 containerd[1467]: time="2026-01-24T00:42:01.088161050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:01.092712 containerd[1467]: time="2026-01-24T00:42:01.092591268Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 24 00:42:01.095492 containerd[1467]: time="2026-01-24T00:42:01.095422442Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:01.101551 containerd[1467]: time="2026-01-24T00:42:01.101420305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:01.105197 containerd[1467]: time="2026-01-24T00:42:01.104461966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 4.456113538s" Jan 24 00:42:01.105197 containerd[1467]: time="2026-01-24T00:42:01.104505126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:42:01.105523 containerd[1467]: time="2026-01-24T00:42:01.105439243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:42:01.509640 kubelet[1916]: E0124 00:42:01.508853 1916 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:01.514782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:01.515295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:42:01.516257 systemd[1]: kubelet.service: Consumed 2.885s CPU time. Jan 24 00:42:04.840131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201165126.mount: Deactivated successfully. 
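The pulls in this stretch (kube-apiserver, kube-controller-manager, and kube-scheduler so far, with kube-proxy, coredns, pause, and etcd following) go through containerd's CRI PullImage one image at a time, which is why the "Pulled image ... in Ns" entries are seconds apart. The same fetches can be driven by hand over the CRI socket, sketched here with the versions seen in the log; kubeadm can also pre-pull the whole set:

    # Pull a single image through containerd's CRI endpoint.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-scheduler:v1.32.11

    # Or pre-fetch every image kubeadm needs for this Kubernetes version.
    kubeadm config images pull --kubernetes-version v1.32.11

    # List what is already cached in the CRI image store.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images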
Jan 24 00:42:10.045277 containerd[1467]: time="2026-01-24T00:42:10.044428285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:10.064714 containerd[1467]: time="2026-01-24T00:42:10.050774760Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:42:10.068740 containerd[1467]: time="2026-01-24T00:42:10.068533408Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:10.075480 containerd[1467]: time="2026-01-24T00:42:10.075319643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:10.078539 containerd[1467]: time="2026-01-24T00:42:10.078371246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 8.972850352s" Jan 24 00:42:10.079736 containerd[1467]: time="2026-01-24T00:42:10.079611073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:42:10.083110 containerd[1467]: time="2026-01-24T00:42:10.083070706Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:42:11.177805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92310436.mount: Deactivated successfully. Jan 24 00:42:11.567489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:42:11.607700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:12.739813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:12.792337 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:13.784053 update_engine[1451]: I20260124 00:42:13.783044 1451 update_attempter.cc:509] Updating boot flags... Jan 24 00:42:13.813035 kubelet[1953]: E0124 00:42:13.810263 1953 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:13.818156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:13.818724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:42:13.820341 systemd[1]: kubelet.service: Consumed 2.161s CPU time. 
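Each failed kubelet start is retried by systemd under the unit's Restart= policy, which is where the "Scheduled restart job, restart counter is at N" lines come from. The policy, the delay between attempts, and the counter can be read back directly:

    # Restart policy, delay between attempts, and restarts so far for the unit.
    systemctl show kubelet -p Restart -p RestartSec -p NRestarts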
Jan 24 00:42:13.874810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1979) Jan 24 00:42:14.149157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1982) Jan 24 00:42:16.694539 containerd[1467]: time="2026-01-24T00:42:16.694318340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:16.697062 containerd[1467]: time="2026-01-24T00:42:16.696667658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 24 00:42:16.698530 containerd[1467]: time="2026-01-24T00:42:16.698460893Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:16.705280 containerd[1467]: time="2026-01-24T00:42:16.705022485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:16.706989 containerd[1467]: time="2026-01-24T00:42:16.706666482Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 6.623471696s" Jan 24 00:42:16.706989 containerd[1467]: time="2026-01-24T00:42:16.706716385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:42:16.707689 containerd[1467]: time="2026-01-24T00:42:16.707666352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:42:17.211803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353490736.mount: Deactivated successfully. 
Jan 24 00:42:17.228661 containerd[1467]: time="2026-01-24T00:42:17.228441379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:17.230633 containerd[1467]: time="2026-01-24T00:42:17.230445081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:42:17.232559 containerd[1467]: time="2026-01-24T00:42:17.232407036Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:17.240067 containerd[1467]: time="2026-01-24T00:42:17.239566058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:17.242254 containerd[1467]: time="2026-01-24T00:42:17.241782365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.004425ms" Jan 24 00:42:17.242254 containerd[1467]: time="2026-01-24T00:42:17.241819274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:42:17.243424 containerd[1467]: time="2026-01-24T00:42:17.243305684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:42:17.865203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101126269.mount: Deactivated successfully. Jan 24 00:42:24.095382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:42:24.118343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:24.855770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:24.917479 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:42:25.405636 kubelet[2083]: E0124 00:42:25.405323 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:42:25.441261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:42:25.498315 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:42:25.522162 systemd[1]: kubelet.service: Consumed 1.351s CPU time. 
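One detail visible across these pulls: the pause image fetched here is 3.10 (the version current kubeadm releases request), while the CRI config dumped earlier shows containerd's own default SandboxImage as registry.k8s.io/pause:3.8. The mismatch is typically harmless, though kubeadm warns about it; if desired, containerd can be pointed at the same version (a sketch, assuming the stock config path /etc/containerd/config.toml):

    # Sandbox (pause) image currently configured in containerd's CRI plugin.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -i sandbox

    # To align containerd with the version pulled above, set in
    # /etc/containerd/config.toml under [plugins."io.containerd.grpc.v1.cri"]:
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    # and then restart containerd:
    systemctl restart containerd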
Jan 24 00:42:25.924752 containerd[1467]: time="2026-01-24T00:42:25.924522139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:25.927225 containerd[1467]: time="2026-01-24T00:42:25.927078052Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 24 00:42:25.930204 containerd[1467]: time="2026-01-24T00:42:25.929828042Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:25.934572 containerd[1467]: time="2026-01-24T00:42:25.934210566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:25.935579 containerd[1467]: time="2026-01-24T00:42:25.935387847Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.692048069s" Jan 24 00:42:25.935579 containerd[1467]: time="2026-01-24T00:42:25.935467756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:42:29.426328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:29.426672 systemd[1]: kubelet.service: Consumed 1.351s CPU time. Jan 24 00:42:29.439442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:29.479818 systemd[1]: Reloading requested from client PID 2124 ('systemctl') (unit session-9.scope)... Jan 24 00:42:29.480003 systemd[1]: Reloading... Jan 24 00:42:29.614515 zram_generator::config[2163]: No configuration found. Jan 24 00:42:29.782677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:42:29.893475 systemd[1]: Reloading finished in 412 ms. Jan 24 00:42:29.967696 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:42:29.967790 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:42:29.968241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:29.971983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:30.211811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:30.222400 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:42:30.419187 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:42:30.419187 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 24 00:42:30.419187 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:42:30.420135 kubelet[2212]: I0124 00:42:30.419510 2212 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:42:31.315375 kubelet[2212]: I0124 00:42:31.314960 2212 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:42:31.315375 kubelet[2212]: I0124 00:42:31.315224 2212 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:42:31.317340 kubelet[2212]: I0124 00:42:31.317241 2212 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:42:31.405596 kubelet[2212]: E0124 00:42:31.405549 2212 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:31.407280 kubelet[2212]: I0124 00:42:31.407101 2212 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:42:31.446715 kubelet[2212]: E0124 00:42:31.446469 2212 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:42:31.446715 kubelet[2212]: I0124 00:42:31.446540 2212 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:42:31.467433 kubelet[2212]: I0124 00:42:31.467375 2212 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:42:31.468549 kubelet[2212]: I0124 00:42:31.468414 2212 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:42:31.469439 kubelet[2212]: I0124 00:42:31.468526 2212 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:42:31.469638 kubelet[2212]: I0124 00:42:31.469536 2212 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:42:31.469638 kubelet[2212]: I0124 00:42:31.469550 2212 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:42:31.470178 kubelet[2212]: I0124 00:42:31.470096 2212 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:42:31.473618 kubelet[2212]: I0124 00:42:31.473526 2212 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:42:31.473678 kubelet[2212]: I0124 00:42:31.473653 2212 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:42:31.474054 kubelet[2212]: I0124 00:42:31.473795 2212 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:42:31.474054 kubelet[2212]: I0124 00:42:31.473984 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:42:31.490582 kubelet[2212]: I0124 00:42:31.490560 2212 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:42:31.495495 kubelet[2212]: I0124 00:42:31.493129 2212 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:42:31.495495 kubelet[2212]: W0124 00:42:31.493267 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:31.495495 kubelet[2212]: E0124 00:42:31.493621 2212 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:31.497651 kubelet[2212]: W0124 00:42:31.497506 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:31.497827 kubelet[2212]: E0124 00:42:31.497737 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:31.502645 kubelet[2212]: W0124 00:42:31.498302 2212 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:42:31.529431 kubelet[2212]: I0124 00:42:31.528806 2212 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:42:31.530720 kubelet[2212]: I0124 00:42:31.530623 2212 server.go:1287] "Started kubelet" Jan 24 00:42:31.576690 kubelet[2212]: I0124 00:42:31.572611 2212 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:42:31.585657 kubelet[2212]: I0124 00:42:31.582476 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:42:31.594609 kubelet[2212]: E0124 00:42:31.585413 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d83fd0f1861d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:42:31.529693649 +0000 UTC m=+1.296558964,LastTimestamp:2026-01-24 00:42:31.529693649 +0000 UTC m=+1.296558964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:42:31.596981 kubelet[2212]: I0124 00:42:31.595464 2212 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:42:31.596981 kubelet[2212]: I0124 00:42:31.596115 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:42:31.597407 kubelet[2212]: I0124 00:42:31.597324 2212 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:42:31.598814 kubelet[2212]: I0124 00:42:31.598658 2212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:42:31.600198 kubelet[2212]: I0124 00:42:31.600023 2212 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:42:31.601223 kubelet[2212]: E0124 00:42:31.601136 2212 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 
00:42:31.606127 kubelet[2212]: E0124 00:42:31.605807 2212 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:42:31.611775 kubelet[2212]: I0124 00:42:31.611506 2212 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:42:31.613440 kubelet[2212]: W0124 00:42:31.612595 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:31.613440 kubelet[2212]: E0124 00:42:31.612788 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:31.613440 kubelet[2212]: E0124 00:42:31.613228 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Jan 24 00:42:31.613440 kubelet[2212]: I0124 00:42:31.613391 2212 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:42:31.623361 kubelet[2212]: I0124 00:42:31.623042 2212 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:42:31.625701 kubelet[2212]: I0124 00:42:31.623832 2212 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:42:31.628738 kubelet[2212]: I0124 00:42:31.628361 2212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:42:31.722665 kubelet[2212]: E0124 00:42:31.722358 2212 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:42:31.787758 kubelet[2212]: I0124 00:42:31.787518 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:42:31.794547 kubelet[2212]: I0124 00:42:31.794422 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:42:31.794722 kubelet[2212]: I0124 00:42:31.794659 2212 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:42:31.794781 kubelet[2212]: I0124 00:42:31.794762 2212 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
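Every failure above, from the certificate signing request to the informer list/watch calls and the node-lease lookup, is the same symptom: nothing is serving on 10.0.0.28:6443 yet, because the kubelet comes up before its own static kube-apiserver pod. A quick way to confirm that from the node is sketched below; the endpoint address comes from the log, while the containerd socket path is an assumption.

# Anything listening on the apiserver port yet? (ss is part of iproute2)
ss -tlnp | grep 6443 || echo "nothing listening on 6443 yet"
# Probe the health endpoint; -k skips TLS verification for a quick check.
curl -sk https://10.0.0.28:6443/healthz || echo "apiserver not reachable yet"
# Has containerd started a kube-apiserver container? (socket path assumed)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube-apiserver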
Jan 24 00:42:31.795129 kubelet[2212]: I0124 00:42:31.794849 2212 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:42:31.795396 kubelet[2212]: E0124 00:42:31.795126 2212 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:42:31.798367 kubelet[2212]: W0124 00:42:31.798274 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:31.798367 kubelet[2212]: E0124 00:42:31.798323 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:31.811788 kubelet[2212]: I0124 00:42:31.811442 2212 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:42:31.811788 kubelet[2212]: I0124 00:42:31.811556 2212 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:42:31.811788 kubelet[2212]: I0124 00:42:31.811633 2212 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:42:31.815106 kubelet[2212]: E0124 00:42:31.814683 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Jan 24 00:42:31.816319 kubelet[2212]: I0124 00:42:31.816199 2212 policy_none.go:49] "None policy: Start" Jan 24 00:42:31.816467 kubelet[2212]: I0124 00:42:31.816388 2212 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:42:31.816497 kubelet[2212]: I0124 00:42:31.816466 2212 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:42:31.822769 kubelet[2212]: E0124 00:42:31.822635 2212 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:42:31.830246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:42:31.863691 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:42:31.869642 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
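The "none" CPU manager policy, the "None" memory manager policy, and the kubepods*.slice cgroups created here all follow from the nodeConfig logged at 00:42:31.468 (CgroupDriver "systemd", CgroupsPerQOS true). A minimal sketch of the equivalent kubelet.config.k8s.io/v1beta1 fields is below; the output path is only a scratch location chosen for illustration, not a file read from this host.

cat <<'EOF' > /tmp/kubelet-config-sketch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # "CgroupDriver":"systemd" in the nodeConfig
cgroupsPerQOS: true          # "CgroupsPerQOS":true, hence kubepods.slice / -burstable / -besteffort
cpuManagerPolicy: none       # cpu_manager.go "Starting CPU manager" policy="none"
memoryManagerPolicy: None    # memory_manager.go "Starting memorymanager" policy="None"
staticPodPath: /etc/kubernetes/manifests
EOF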
Jan 24 00:42:31.897438 kubelet[2212]: E0124 00:42:31.897390 2212 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:42:31.900718 kubelet[2212]: I0124 00:42:31.900575 2212 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:42:31.901636 kubelet[2212]: I0124 00:42:31.901422 2212 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:42:31.901747 kubelet[2212]: I0124 00:42:31.901557 2212 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:42:31.902785 kubelet[2212]: I0124 00:42:31.902498 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:42:31.906059 kubelet[2212]: E0124 00:42:31.905823 2212 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:42:31.906139 kubelet[2212]: E0124 00:42:31.906031 2212 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:42:32.005320 kubelet[2212]: I0124 00:42:32.005073 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:32.006517 kubelet[2212]: E0124 00:42:32.006415 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 24 00:42:32.133086 kubelet[2212]: I0124 00:42:32.132136 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:32.222769 kubelet[2212]: I0124 00:42:32.222463 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:32.225038 kubelet[2212]: E0124 00:42:32.224833 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Jan 24 00:42:32.225038 kubelet[2212]: E0124 00:42:32.224991 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 24 00:42:32.232019 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
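The eviction manager that starts its control loop here enforces the HardEvictionThresholds from the nodeConfig above. Written out as the kubelet's evictionHard map, they are the stock defaults; the fragment below is a sketch of the equivalent KubeletConfiguration snippet, not a file taken from this host.

cat <<'EOF'
evictionHard:
  memory.available: "100Mi"    # {"Quantity":"100Mi"} in the logged threshold
  nodefs.available: "10%"      # Percentage 0.1
  nodefs.inodesFree: "5%"      # Percentage 0.05
  imagefs.available: "15%"     # Percentage 0.15
  imagefs.inodesFree: "5%"     # Percentage 0.05
EOF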
Jan 24 00:42:32.235366 kubelet[2212]: I0124 00:42:32.235233 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/706a3df22634a4f610883fdebba90315-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"706a3df22634a4f610883fdebba90315\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:32.235366 kubelet[2212]: I0124 00:42:32.235345 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:32.235366 kubelet[2212]: I0124 00:42:32.235496 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/706a3df22634a4f610883fdebba90315-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"706a3df22634a4f610883fdebba90315\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:32.235366 kubelet[2212]: I0124 00:42:32.235514 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:32.236472 kubelet[2212]: I0124 00:42:32.235529 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:32.236472 kubelet[2212]: I0124 00:42:32.235729 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:32.236472 kubelet[2212]: I0124 00:42:32.235807 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/706a3df22634a4f610883fdebba90315-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"706a3df22634a4f610883fdebba90315\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:32.236472 kubelet[2212]: I0124 00:42:32.235827 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:32.256687 kubelet[2212]: E0124 00:42:32.254634 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:32.259443 kubelet[2212]: E0124 00:42:32.259277 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:32.261114 systemd[1]: Created slice kubepods-burstable-pod706a3df22634a4f610883fdebba90315.slice - libcontainer container kubepods-burstable-pod706a3df22634a4f610883fdebba90315.slice. Jan 24 00:42:32.262433 containerd[1467]: time="2026-01-24T00:42:32.262252475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 24 00:42:32.266561 kubelet[2212]: E0124 00:42:32.266270 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:32.268206 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 24 00:42:32.273132 kubelet[2212]: E0124 00:42:32.272803 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:32.407112 kubelet[2212]: W0124 00:42:32.406618 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:32.407112 kubelet[2212]: E0124 00:42:32.406794 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:32.569768 kubelet[2212]: E0124 00:42:32.569546 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:32.572688 containerd[1467]: time="2026-01-24T00:42:32.572629786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:706a3df22634a4f610883fdebba90315,Namespace:kube-system,Attempt:0,}" Jan 24 00:42:32.574877 kubelet[2212]: E0124 00:42:32.574713 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:32.575876 containerd[1467]: time="2026-01-24T00:42:32.575850877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 24 00:42:32.629181 kubelet[2212]: I0124 00:42:32.629063 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:32.630090 kubelet[2212]: E0124 00:42:32.629830 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 24 00:42:32.829273 kubelet[2212]: W0124 00:42:32.828865 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 
24 00:42:32.829273 kubelet[2212]: E0124 00:42:32.829161 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:32.869799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974473767.mount: Deactivated successfully. Jan 24 00:42:32.886454 containerd[1467]: time="2026-01-24T00:42:32.886269379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:42:32.892173 containerd[1467]: time="2026-01-24T00:42:32.892017184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:42:32.893758 containerd[1467]: time="2026-01-24T00:42:32.893650236Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:42:32.896392 containerd[1467]: time="2026-01-24T00:42:32.896111180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:42:32.898470 containerd[1467]: time="2026-01-24T00:42:32.898304569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:42:32.900775 containerd[1467]: time="2026-01-24T00:42:32.900663743Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:42:32.902364 containerd[1467]: time="2026-01-24T00:42:32.902206769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:42:32.913498 containerd[1467]: time="2026-01-24T00:42:32.913347026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:42:32.921215 containerd[1467]: time="2026-01-24T00:42:32.920824114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 344.6694ms" Jan 24 00:42:32.923497 containerd[1467]: time="2026-01-24T00:42:32.923405320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.035326ms" Jan 24 00:42:32.929596 containerd[1467]: time="2026-01-24T00:42:32.929498546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 356.786788ms" Jan 24 00:42:33.026523 kubelet[2212]: E0124 00:42:33.026352 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" Jan 24 00:42:33.182737 kubelet[2212]: W0124 00:42:33.173767 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:33.182737 kubelet[2212]: E0124 00:42:33.174225 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:33.272203 kubelet[2212]: W0124 00:42:33.272009 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:33.272203 kubelet[2212]: E0124 00:42:33.272113 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:33.435671 kubelet[2212]: I0124 00:42:33.435369 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:33.437422 kubelet[2212]: E0124 00:42:33.437364 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 24 00:42:33.548271 kubelet[2212]: E0124 00:42:33.548102 2212 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:34.046077 containerd[1467]: time="2026-01-24T00:42:34.045130480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:42:34.046077 containerd[1467]: time="2026-01-24T00:42:34.045622487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:42:34.046077 containerd[1467]: time="2026-01-24T00:42:34.045770533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:34.064800 containerd[1467]: time="2026-01-24T00:42:34.064588341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:34.473314 systemd[1]: Started cri-containerd-8135fc05ba3ff2bc0792cba3bd65796a111823b1ce5de965ad59b6f288d0874b.scope - libcontainer container 8135fc05ba3ff2bc0792cba3bd65796a111823b1ce5de965ad59b6f288d0874b. Jan 24 00:42:34.482184 containerd[1467]: time="2026-01-24T00:42:34.473712890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:42:34.482184 containerd[1467]: time="2026-01-24T00:42:34.480167128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:42:34.482184 containerd[1467]: time="2026-01-24T00:42:34.480198326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:34.492835 containerd[1467]: time="2026-01-24T00:42:34.491733425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:34.496324 containerd[1467]: time="2026-01-24T00:42:34.493636281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:42:34.496324 containerd[1467]: time="2026-01-24T00:42:34.495079400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:42:34.496324 containerd[1467]: time="2026-01-24T00:42:34.495095631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:34.496324 containerd[1467]: time="2026-01-24T00:42:34.495444231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:34.741728 kubelet[2212]: E0124 00:42:34.740723 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="3.2s" Jan 24 00:42:34.790438 systemd[1]: Started cri-containerd-4f33161ebe33434eacb49e96cc982c2f8aa466e0eee47175e1c49eb3dbe1385b.scope - libcontainer container 4f33161ebe33434eacb49e96cc982c2f8aa466e0eee47175e1c49eb3dbe1385b. Jan 24 00:42:34.839790 systemd[1]: Started cri-containerd-7d06d9e0df712e17cbd1095636bb90c3ad4be9c3868811a22a17bb77c970c012.scope - libcontainer container 7d06d9e0df712e17cbd1095636bb90c3ad4be9c3868811a22a17bb77c970c012. 
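Each "Started cri-containerd-<id>.scope" unit here is a pod sandbox backed by the pause:3.8 image pulled above. The sandbox IDs in the unit names can be matched back to pods with crictl, as sketched below; the runtime socket path is assumed and the ID is abbreviated from the scope name in the log.

# List pod sandboxes known to containerd's CRI plugin.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
# Inspect one sandbox by ID prefix (prefix taken from the scope unit above).
crictl --runtime-endpoint unix:///run/containerd/containerd.sock inspectp 8135fc05ba3f | head -n 20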
Jan 24 00:42:35.118483 kubelet[2212]: W0124 00:42:35.118383 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:35.170286 kubelet[2212]: E0124 00:42:35.118821 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:35.226764 kubelet[2212]: I0124 00:42:35.226733 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:35.229742 kubelet[2212]: E0124 00:42:35.229717 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 24 00:42:35.307105 kubelet[2212]: W0124 00:42:35.306884 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:35.307610 kubelet[2212]: E0124 00:42:35.307512 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:35.376514 containerd[1467]: time="2026-01-24T00:42:35.375691833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8135fc05ba3ff2bc0792cba3bd65796a111823b1ce5de965ad59b6f288d0874b\"" Jan 24 00:42:35.381837 kubelet[2212]: E0124 00:42:35.381729 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:35.478572 containerd[1467]: time="2026-01-24T00:42:35.478218187Z" level=info msg="CreateContainer within sandbox \"8135fc05ba3ff2bc0792cba3bd65796a111823b1ce5de965ad59b6f288d0874b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:42:35.529125 containerd[1467]: time="2026-01-24T00:42:35.528693884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:706a3df22634a4f610883fdebba90315,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f33161ebe33434eacb49e96cc982c2f8aa466e0eee47175e1c49eb3dbe1385b\"" Jan 24 00:42:35.534860 kubelet[2212]: E0124 00:42:35.534787 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:35.538574 containerd[1467]: time="2026-01-24T00:42:35.538389190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d06d9e0df712e17cbd1095636bb90c3ad4be9c3868811a22a17bb77c970c012\"" Jan 24 00:42:35.540579 kubelet[2212]: W0124 00:42:35.539118 2212 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:35.540579 kubelet[2212]: E0124 00:42:35.539169 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:35.542185 kubelet[2212]: E0124 00:42:35.542089 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:35.544654 containerd[1467]: time="2026-01-24T00:42:35.543634811Z" level=info msg="CreateContainer within sandbox \"4f33161ebe33434eacb49e96cc982c2f8aa466e0eee47175e1c49eb3dbe1385b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:42:35.547227 containerd[1467]: time="2026-01-24T00:42:35.547197752Z" level=info msg="CreateContainer within sandbox \"7d06d9e0df712e17cbd1095636bb90c3ad4be9c3868811a22a17bb77c970c012\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:42:35.579574 containerd[1467]: time="2026-01-24T00:42:35.579221320Z" level=info msg="CreateContainer within sandbox \"8135fc05ba3ff2bc0792cba3bd65796a111823b1ce5de965ad59b6f288d0874b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"39aba5a71507ba74c4fedb8207dfcb7a49b041c1f52ae52d95ac4002540e4cb1\"" Jan 24 00:42:35.581426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766208531.mount: Deactivated successfully. Jan 24 00:42:35.583642 containerd[1467]: time="2026-01-24T00:42:35.583511476Z" level=info msg="StartContainer for \"39aba5a71507ba74c4fedb8207dfcb7a49b041c1f52ae52d95ac4002540e4cb1\"" Jan 24 00:42:35.596285 containerd[1467]: time="2026-01-24T00:42:35.596158427Z" level=info msg="CreateContainer within sandbox \"4f33161ebe33434eacb49e96cc982c2f8aa466e0eee47175e1c49eb3dbe1385b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"13f8f0324642fd67bde9f329826867827dd0e5c054f27fa7dbcc4e1a3791bd48\"" Jan 24 00:42:35.597607 containerd[1467]: time="2026-01-24T00:42:35.597444393Z" level=info msg="StartContainer for \"13f8f0324642fd67bde9f329826867827dd0e5c054f27fa7dbcc4e1a3791bd48\"" Jan 24 00:42:35.603526 containerd[1467]: time="2026-01-24T00:42:35.603308951Z" level=info msg="CreateContainer within sandbox \"7d06d9e0df712e17cbd1095636bb90c3ad4be9c3868811a22a17bb77c970c012\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3bc393fb3bf50a90aff9178e29b94f482f3d425df7cd1637c2bc4ab168113a4\"" Jan 24 00:42:35.607316 containerd[1467]: time="2026-01-24T00:42:35.607221553Z" level=info msg="StartContainer for \"e3bc393fb3bf50a90aff9178e29b94f482f3d425df7cd1637c2bc4ab168113a4\"" Jan 24 00:42:35.691178 systemd[1]: Started cri-containerd-39aba5a71507ba74c4fedb8207dfcb7a49b041c1f52ae52d95ac4002540e4cb1.scope - libcontainer container 39aba5a71507ba74c4fedb8207dfcb7a49b041c1f52ae52d95ac4002540e4cb1. 
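The CreateContainer/StartContainer calls above are driven by the static pod manifests under /etc/kubernetes/manifests, the path the kubelet added at 00:42:31.473. A skeleton of such a manifest for the scheduler is sketched below; the image tag is inferred from the logged kubelet version and the command line is illustrative, neither is copied from this host.

cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-node-critical         # the class the mirror-pod errors at 00:42:40 refer to
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.32.4  # assumed to match kubeletVersion v1.32.4
    command: ["kube-scheduler", "--kubeconfig=/etc/kubernetes/scheduler.conf"]
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.conf
      readOnly: true
  volumes:
  - name: kubeconfig           # the "kubeconfig" hostPath volume verified at 00:42:32.132
    hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
EOF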
Jan 24 00:42:35.808813 systemd[1]: Started cri-containerd-13f8f0324642fd67bde9f329826867827dd0e5c054f27fa7dbcc4e1a3791bd48.scope - libcontainer container 13f8f0324642fd67bde9f329826867827dd0e5c054f27fa7dbcc4e1a3791bd48. Jan 24 00:42:35.961095 kubelet[2212]: W0124 00:42:35.960612 2212 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.28:6443: connect: connection refused Jan 24 00:42:35.961095 kubelet[2212]: E0124 00:42:35.960659 2212 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:42:35.979099 systemd[1]: Started cri-containerd-e3bc393fb3bf50a90aff9178e29b94f482f3d425df7cd1637c2bc4ab168113a4.scope - libcontainer container e3bc393fb3bf50a90aff9178e29b94f482f3d425df7cd1637c2bc4ab168113a4. Jan 24 00:42:36.078006 containerd[1467]: time="2026-01-24T00:42:36.077818172Z" level=info msg="StartContainer for \"39aba5a71507ba74c4fedb8207dfcb7a49b041c1f52ae52d95ac4002540e4cb1\" returns successfully" Jan 24 00:42:36.216599 containerd[1467]: time="2026-01-24T00:42:36.211302928Z" level=info msg="StartContainer for \"13f8f0324642fd67bde9f329826867827dd0e5c054f27fa7dbcc4e1a3791bd48\" returns successfully" Jan 24 00:42:36.230506 containerd[1467]: time="2026-01-24T00:42:36.230404335Z" level=info msg="StartContainer for \"e3bc393fb3bf50a90aff9178e29b94f482f3d425df7cd1637c2bc4ab168113a4\" returns successfully" Jan 24 00:42:36.243731 kubelet[2212]: E0124 00:42:36.243634 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:36.244004 kubelet[2212]: E0124 00:42:36.243811 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:36.281078 kubelet[2212]: E0124 00:42:36.280164 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:36.281078 kubelet[2212]: E0124 00:42:36.280373 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:36.281078 kubelet[2212]: E0124 00:42:36.280873 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:36.281578 kubelet[2212]: E0124 00:42:36.281525 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:37.283373 kubelet[2212]: E0124 00:42:37.283337 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:37.284788 kubelet[2212]: E0124 00:42:37.284196 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 24 00:42:37.285434 kubelet[2212]: E0124 00:42:37.285283 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:37.285674 kubelet[2212]: E0124 00:42:37.285652 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:38.285230 kubelet[2212]: E0124 00:42:38.285093 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:38.286076 kubelet[2212]: E0124 00:42:38.285268 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:38.286076 kubelet[2212]: E0124 00:42:38.285491 2212 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:42:38.286076 kubelet[2212]: E0124 00:42:38.285579 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:38.432669 kubelet[2212]: I0124 00:42:38.432555 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:40.629076 kubelet[2212]: E0124 00:42:40.629014 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:42:40.778171 kubelet[2212]: I0124 00:42:40.776293 2212 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:42:40.795631 kubelet[2212]: E0124 00:42:40.786121 2212 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d83fd0f1861d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:42:31.529693649 +0000 UTC m=+1.296558964,LastTimestamp:2026-01-24 00:42:31.529693649 +0000 UTC m=+1.296558964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:42:40.806102 kubelet[2212]: I0124 00:42:40.804380 2212 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:40.842402 kubelet[2212]: E0124 00:42:40.842254 2212 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:40.842402 kubelet[2212]: I0124 00:42:40.842341 2212 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:40.848357 kubelet[2212]: E0124 00:42:40.848320 2212 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:40.849319 kubelet[2212]: I0124 
00:42:40.848730 2212 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:40.852167 kubelet[2212]: E0124 00:42:40.852140 2212 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:42.312562 kubelet[2212]: I0124 00:42:42.312516 2212 apiserver.go:52] "Watching apiserver" Jan 24 00:42:42.412463 kubelet[2212]: I0124 00:42:42.412429 2212 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:42:45.966112 systemd[1]: Reloading requested from client PID 2496 ('systemctl') (unit session-9.scope)... Jan 24 00:42:45.966172 systemd[1]: Reloading... Jan 24 00:42:46.211378 kubelet[2212]: I0124 00:42:46.210224 2212 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:46.430773 kubelet[2212]: E0124 00:42:46.430649 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:46.434071 zram_generator::config[2535]: No configuration found. Jan 24 00:42:46.444275 kubelet[2212]: E0124 00:42:46.443241 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:46.492594 kubelet[2212]: I0124 00:42:46.492190 2212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.492067987 podStartE2EDuration="492.067987ms" podCreationTimestamp="2026-01-24 00:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:42:46.491885007 +0000 UTC m=+16.258750311" watchObservedRunningTime="2026-01-24 00:42:46.492067987 +0000 UTC m=+16.258933212" Jan 24 00:42:46.691817 kubelet[2212]: I0124 00:42:46.691611 2212 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:46.715645 kubelet[2212]: E0124 00:42:46.715510 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:46.839075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:42:47.106883 systemd[1]: Reloading finished in 1139 ms. Jan 24 00:42:47.198640 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:47.212433 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:42:47.213402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:42:47.213579 systemd[1]: kubelet.service: Consumed 7.148s CPU time, 135.7M memory peak, 0B memory swap peak. Jan 24 00:42:47.228088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:42:47.806469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
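Node registration finally succeeds at 00:42:40, but the mirror pods for the static control-plane pods are still rejected until the apiserver has created its built-in system-node-critical PriorityClass. The checks, and the reload/restart the log then records, could be reproduced roughly as below; the admin kubeconfig path is the conventional kubeadm location and is an assumption.

# Confirm the built-in priority class and the node object now exist (kubeconfig path assumed).
kubectl --kubeconfig /etc/kubernetes/admin.conf get priorityclass system-node-critical
kubectl --kubeconfig /etc/kubernetes/admin.conf get node localhost
# The systemd reload and kubelet restart seen at 00:42:45-47 correspond roughly to:
systemctl daemon-reload
systemctl restart kubelet
journalctl -u kubelet -f    # follows the new kubelet instance (PID 2580 in the log)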
Jan 24 00:42:47.815519 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:42:47.989000 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:42:47.989000 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:42:47.989000 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:42:47.989000 kubelet[2580]: I0124 00:42:47.987677 2580 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:42:48.011103 kubelet[2580]: I0124 00:42:48.010858 2580 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:42:48.011440 kubelet[2580]: I0124 00:42:48.011139 2580 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:42:48.011738 kubelet[2580]: I0124 00:42:48.011641 2580 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:42:48.014218 kubelet[2580]: I0124 00:42:48.014103 2580 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:42:48.023492 kubelet[2580]: I0124 00:42:48.022511 2580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:42:48.106395 kubelet[2580]: E0124 00:42:48.105448 2580 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:42:48.106395 kubelet[2580]: I0124 00:42:48.105638 2580 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:42:48.125672 kubelet[2580]: I0124 00:42:48.123864 2580 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:42:48.134785 kubelet[2580]: I0124 00:42:48.133443 2580 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:42:48.134785 kubelet[2580]: I0124 00:42:48.133605 2580 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:42:48.134785 kubelet[2580]: I0124 00:42:48.134323 2580 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:42:48.134785 kubelet[2580]: I0124 00:42:48.134339 2580 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:42:48.135781 kubelet[2580]: I0124 00:42:48.134511 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:42:48.135781 kubelet[2580]: I0124 00:42:48.135221 2580 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:42:48.135781 kubelet[2580]: I0124 00:42:48.135262 2580 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:42:48.135781 kubelet[2580]: I0124 00:42:48.135472 2580 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:42:48.137492 kubelet[2580]: I0124 00:42:48.136262 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:42:48.147208 kubelet[2580]: I0124 00:42:48.147094 2580 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:42:48.148764 kubelet[2580]: I0124 00:42:48.148531 2580 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:42:48.170140 kubelet[2580]: I0124 00:42:48.169315 2580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:42:48.178964 kubelet[2580]: I0124 00:42:48.178063 2580 server.go:1287] "Started kubelet" Jan 24 00:42:48.184992 kubelet[2580]: I0124 00:42:48.183811 2580 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:42:48.185818 kubelet[2580]: I0124 00:42:48.185491 2580 server.go:479] "Adding 
debug handlers to kubelet server" Jan 24 00:42:48.186587 kubelet[2580]: I0124 00:42:48.186400 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:42:48.188246 kubelet[2580]: I0124 00:42:48.188148 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:42:48.201880 kubelet[2580]: I0124 00:42:48.201721 2580 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:42:48.210195 kubelet[2580]: I0124 00:42:48.210008 2580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:42:48.285187 kubelet[2580]: I0124 00:42:48.283457 2580 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:42:48.289465 kubelet[2580]: E0124 00:42:48.289339 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:42:48.290838 kubelet[2580]: I0124 00:42:48.290603 2580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:42:48.291560 kubelet[2580]: I0124 00:42:48.291298 2580 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:42:48.301498 kubelet[2580]: I0124 00:42:48.301413 2580 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:42:48.304313 kubelet[2580]: I0124 00:42:48.302007 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:42:48.306167 kubelet[2580]: E0124 00:42:48.305610 2580 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:42:48.329847 kubelet[2580]: I0124 00:42:48.329056 2580 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:42:48.345452 kubelet[2580]: I0124 00:42:48.345124 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:42:48.380523 kubelet[2580]: I0124 00:42:48.380180 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:42:48.380523 kubelet[2580]: I0124 00:42:48.380375 2580 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:42:48.380523 kubelet[2580]: I0124 00:42:48.380454 2580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
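Unlike the first start, the restarted kubelet (PID 2580) finds an existing rotated client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem, which is why there is no CSR bootstrap error this time. The certificate pair referenced in the log can be inspected as sketched below (assumes openssl is installed on the host).

# Rotated client certificate the kubelet loaded at 00:42:48.014.
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate
# Serving certificate referenced by the dynamic_serving_content controller.
openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -noout -subject -enddate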
Jan 24 00:42:48.380523 kubelet[2580]: I0124 00:42:48.380468 2580 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:42:48.381195 kubelet[2580]: E0124 00:42:48.380598 2580 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:42:48.481492 kubelet[2580]: E0124 00:42:48.480964 2580 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 00:42:48.579218 kubelet[2580]: I0124 00:42:48.579175 2580 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.579591 2580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.579620 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.580065 2580 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.580082 2580 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.580104 2580 policy_none.go:49] "None policy: Start" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.580117 2580 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.580131 2580 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:42:48.581963 kubelet[2580]: I0124 00:42:48.580336 2580 state_mem.go:75] "Updated machine memory state" Jan 24 00:42:48.608295 kubelet[2580]: I0124 00:42:48.608193 2580 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:42:48.615108 kubelet[2580]: I0124 00:42:48.614836 2580 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:42:48.616146 kubelet[2580]: I0124 00:42:48.614996 2580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:42:48.616146 kubelet[2580]: I0124 00:42:48.615792 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:42:48.628302 kubelet[2580]: E0124 00:42:48.628113 2580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:42:48.696643 kubelet[2580]: I0124 00:42:48.692131 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:48.696643 kubelet[2580]: I0124 00:42:48.694649 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.699767 kubelet[2580]: I0124 00:42:48.699307 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:48.699767 kubelet[2580]: I0124 00:42:48.699342 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/706a3df22634a4f610883fdebba90315-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"706a3df22634a4f610883fdebba90315\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:48.699767 kubelet[2580]: I0124 00:42:48.699420 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/706a3df22634a4f610883fdebba90315-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"706a3df22634a4f610883fdebba90315\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:48.699767 kubelet[2580]: I0124 00:42:48.699457 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/706a3df22634a4f610883fdebba90315-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"706a3df22634a4f610883fdebba90315\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:48.720207 kubelet[2580]: E0124 00:42:48.720079 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:48.720875 kubelet[2580]: E0124 00:42:48.720769 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.800835 kubelet[2580]: I0124 00:42:48.799804 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.800835 kubelet[2580]: I0124 00:42:48.799842 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:42:48.800835 kubelet[2580]: I0124 00:42:48.799877 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.800835 kubelet[2580]: I0124 00:42:48.799973 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.800835 kubelet[2580]: I0124 00:42:48.799991 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.806434 kubelet[2580]: I0124 00:42:48.800006 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:42:48.806434 kubelet[2580]: I0124 00:42:48.803626 2580 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:42:48.876769 kubelet[2580]: I0124 00:42:48.876567 2580 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:42:48.879131 kubelet[2580]: I0124 00:42:48.879094 2580 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:42:49.020098 kubelet[2580]: E0124 00:42:49.019007 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:49.027561 kubelet[2580]: E0124 00:42:49.027513 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:49.035971 kubelet[2580]: E0124 00:42:49.032417 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:49.147010 kubelet[2580]: I0124 00:42:49.145705 2580 apiserver.go:52] "Watching apiserver" Jan 24 00:42:49.194346 kubelet[2580]: I0124 00:42:49.193490 2580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:42:49.437440 kubelet[2580]: I0124 00:42:49.437277 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:49.437663 kubelet[2580]: E0124 00:42:49.437515 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:49.440009 kubelet[2580]: E0124 00:42:49.438450 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:49.504111 kubelet[2580]: I0124 00:42:49.503666 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.498564349 podStartE2EDuration="3.498564349s" podCreationTimestamp="2026-01-24 00:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:42:49.495480691 +0000 UTC m=+1.669682554" 
watchObservedRunningTime="2026-01-24 00:42:49.498564349 +0000 UTC m=+1.672766202" Jan 24 00:42:49.519825 kubelet[2580]: E0124 00:42:49.517223 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:42:49.524199 kubelet[2580]: E0124 00:42:49.523877 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:49.613262 kubelet[2580]: I0124 00:42:49.612683 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.612655744 podStartE2EDuration="1.612655744s" podCreationTimestamp="2026-01-24 00:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:42:49.588212471 +0000 UTC m=+1.762414364" watchObservedRunningTime="2026-01-24 00:42:49.612655744 +0000 UTC m=+1.786857597" Jan 24 00:42:50.394014 kubelet[2580]: I0124 00:42:50.392011 2580 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:42:50.394772 containerd[1467]: time="2026-01-24T00:42:50.394514511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:42:50.395372 kubelet[2580]: I0124 00:42:50.394768 2580 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:42:50.479868 kubelet[2580]: E0124 00:42:50.479777 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:50.485456 kubelet[2580]: E0124 00:42:50.483231 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:51.291347 kubelet[2580]: I0124 00:42:51.263786 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0211861f-1847-411f-863f-484bbf8fb16a-xtables-lock\") pod \"kube-proxy-9rbm9\" (UID: \"0211861f-1847-411f-863f-484bbf8fb16a\") " pod="kube-system/kube-proxy-9rbm9" Jan 24 00:42:51.292010 kubelet[2580]: I0124 00:42:51.291483 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck54q\" (UniqueName: \"kubernetes.io/projected/0211861f-1847-411f-863f-484bbf8fb16a-kube-api-access-ck54q\") pod \"kube-proxy-9rbm9\" (UID: \"0211861f-1847-411f-863f-484bbf8fb16a\") " pod="kube-system/kube-proxy-9rbm9" Jan 24 00:42:51.292010 kubelet[2580]: I0124 00:42:51.291537 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0211861f-1847-411f-863f-484bbf8fb16a-kube-proxy\") pod \"kube-proxy-9rbm9\" (UID: \"0211861f-1847-411f-863f-484bbf8fb16a\") " pod="kube-system/kube-proxy-9rbm9" Jan 24 00:42:51.292010 kubelet[2580]: I0124 00:42:51.291616 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0211861f-1847-411f-863f-484bbf8fb16a-lib-modules\") pod \"kube-proxy-9rbm9\" (UID: 
\"0211861f-1847-411f-863f-484bbf8fb16a\") " pod="kube-system/kube-proxy-9rbm9" Jan 24 00:42:51.325489 systemd[1]: Created slice kubepods-besteffort-pod0211861f_1847_411f_863f_484bbf8fb16a.slice - libcontainer container kubepods-besteffort-pod0211861f_1847_411f_863f_484bbf8fb16a.slice. Jan 24 00:42:51.344990 kubelet[2580]: W0124 00:42:51.344214 2580 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 24 00:42:51.348008 kubelet[2580]: E0124 00:42:51.345572 2580 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 24 00:42:51.348008 kubelet[2580]: I0124 00:42:51.345462 2580 status_manager.go:890] "Failed to get status for pod" podUID="0211861f-1847-411f-863f-484bbf8fb16a" pod="kube-system/kube-proxy-9rbm9" err="pods \"kube-proxy-9rbm9\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 24 00:42:51.348008 kubelet[2580]: W0124 00:42:51.345293 2580 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 24 00:42:51.348008 kubelet[2580]: E0124 00:42:51.345834 2580 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 24 00:42:51.486977 kubelet[2580]: E0124 00:42:51.485360 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:51.813598 kubelet[2580]: E0124 00:42:51.813451 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:52.174026 systemd[1]: Created slice kubepods-besteffort-podb2ec2bf2_ebe0_4ebb_9056_f550a03b5a44.slice - libcontainer container kubepods-besteffort-podb2ec2bf2_ebe0_4ebb_9056_f550a03b5a44.slice. 
Jan 24 00:42:52.206743 kubelet[2580]: I0124 00:42:52.206416 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2ec2bf2-ebe0-4ebb-9056-f550a03b5a44-var-lib-calico\") pod \"tigera-operator-7dcd859c48-x2jvz\" (UID: \"b2ec2bf2-ebe0-4ebb-9056-f550a03b5a44\") " pod="tigera-operator/tigera-operator-7dcd859c48-x2jvz" Jan 24 00:42:52.206743 kubelet[2580]: I0124 00:42:52.206521 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2mk\" (UniqueName: \"kubernetes.io/projected/b2ec2bf2-ebe0-4ebb-9056-f550a03b5a44-kube-api-access-sj2mk\") pod \"tigera-operator-7dcd859c48-x2jvz\" (UID: \"b2ec2bf2-ebe0-4ebb-9056-f550a03b5a44\") " pod="tigera-operator/tigera-operator-7dcd859c48-x2jvz" Jan 24 00:42:52.394274 kubelet[2580]: E0124 00:42:52.394165 2580 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:42:52.394638 kubelet[2580]: E0124 00:42:52.394461 2580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0211861f-1847-411f-863f-484bbf8fb16a-kube-proxy podName:0211861f-1847-411f-863f-484bbf8fb16a nodeName:}" failed. No retries permitted until 2026-01-24 00:42:52.894389834 +0000 UTC m=+5.068591687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0211861f-1847-411f-863f-484bbf8fb16a-kube-proxy") pod "kube-proxy-9rbm9" (UID: "0211861f-1847-411f-863f-484bbf8fb16a") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:42:52.411296 kubelet[2580]: E0124 00:42:52.411228 2580 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:42:52.411469 kubelet[2580]: E0124 00:42:52.411312 2580 projected.go:194] Error preparing data for projected volume kube-api-access-ck54q for pod kube-system/kube-proxy-9rbm9: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:42:52.411531 kubelet[2580]: E0124 00:42:52.411479 2580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0211861f-1847-411f-863f-484bbf8fb16a-kube-api-access-ck54q podName:0211861f-1847-411f-863f-484bbf8fb16a nodeName:}" failed. No retries permitted until 2026-01-24 00:42:52.911452585 +0000 UTC m=+5.085654438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ck54q" (UniqueName: "kubernetes.io/projected/0211861f-1847-411f-863f-484bbf8fb16a-kube-api-access-ck54q") pod "kube-proxy-9rbm9" (UID: "0211861f-1847-411f-863f-484bbf8fb16a") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:42:52.482749 containerd[1467]: time="2026-01-24T00:42:52.482457979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-x2jvz,Uid:b2ec2bf2-ebe0-4ebb-9056-f550a03b5a44,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:42:52.487743 kubelet[2580]: E0124 00:42:52.487140 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:52.610339 containerd[1467]: time="2026-01-24T00:42:52.609796745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:42:52.610339 containerd[1467]: time="2026-01-24T00:42:52.610186552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:42:52.610339 containerd[1467]: time="2026-01-24T00:42:52.610245102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:52.611641 containerd[1467]: time="2026-01-24T00:42:52.610694750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:52.670041 systemd[1]: Started cri-containerd-dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da.scope - libcontainer container dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da. Jan 24 00:42:52.797173 containerd[1467]: time="2026-01-24T00:42:52.794220433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-x2jvz,Uid:b2ec2bf2-ebe0-4ebb-9056-f550a03b5a44,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da\"" Jan 24 00:42:52.833636 containerd[1467]: time="2026-01-24T00:42:52.833475797Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:42:53.175974 kubelet[2580]: E0124 00:42:53.175733 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:53.192044 containerd[1467]: time="2026-01-24T00:42:53.191165872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rbm9,Uid:0211861f-1847-411f-863f-484bbf8fb16a,Namespace:kube-system,Attempt:0,}" Jan 24 00:42:53.248177 containerd[1467]: time="2026-01-24T00:42:53.245749757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:42:53.248177 containerd[1467]: time="2026-01-24T00:42:53.245811491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:42:53.248177 containerd[1467]: time="2026-01-24T00:42:53.245824916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:53.248177 containerd[1467]: time="2026-01-24T00:42:53.246126710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:42:53.305308 systemd[1]: Started cri-containerd-28b584aac957fc105286e42b116399dcaac234b8239c842ccf4e8ac6c960abad.scope - libcontainer container 28b584aac957fc105286e42b116399dcaac234b8239c842ccf4e8ac6c960abad. 
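The MountVolume.SetUp failures a few entries back ("No retries permitted until ... durationBeforeRetry 500ms") are retried with per-volume exponential backoff: the first retry waits 500ms and the delay grows on repeated failures. A rough sketch of that schedule, assuming a doubling factor and a cap of about two minutes (only the 500ms base is taken from the log):

```python
# Sketch: exponential backoff of the kind applied between failed
# MountVolume.SetUp attempts. The 500ms base is from the log; the doubling
# factor and two-minute cap are assumptions for illustration.
from datetime import datetime, timedelta

def retry_schedule(start, base_s=0.5, factor=2.0, cap_s=120.0, attempts=5):
    delay, t = base_s, start
    for attempt in range(1, attempts + 1):
        t = t + timedelta(seconds=delay)
        yield attempt, delay, t
        delay = min(delay * factor, cap_s)

start = datetime.fromisoformat("2026-01-24T00:42:52.394389")
for attempt, delay, when in retry_schedule(start):
    print(f"attempt {attempt}: wait {delay:.1f}s, retry no earlier than {when}")
```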
Jan 24 00:42:53.348205 containerd[1467]: time="2026-01-24T00:42:53.347594576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rbm9,Uid:0211861f-1847-411f-863f-484bbf8fb16a,Namespace:kube-system,Attempt:0,} returns sandbox id \"28b584aac957fc105286e42b116399dcaac234b8239c842ccf4e8ac6c960abad\"" Jan 24 00:42:53.349373 kubelet[2580]: E0124 00:42:53.349277 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:53.378259 containerd[1467]: time="2026-01-24T00:42:53.378183748Z" level=info msg="CreateContainer within sandbox \"28b584aac957fc105286e42b116399dcaac234b8239c842ccf4e8ac6c960abad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:42:53.473134 containerd[1467]: time="2026-01-24T00:42:53.472655120Z" level=info msg="CreateContainer within sandbox \"28b584aac957fc105286e42b116399dcaac234b8239c842ccf4e8ac6c960abad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"499cf5188ca22c40c9d8793b9fb185731cb7f5a7065f1cc33fe5ad380e3a7406\"" Jan 24 00:42:53.475837 containerd[1467]: time="2026-01-24T00:42:53.475744905Z" level=info msg="StartContainer for \"499cf5188ca22c40c9d8793b9fb185731cb7f5a7065f1cc33fe5ad380e3a7406\"" Jan 24 00:42:53.546294 systemd[1]: Started cri-containerd-499cf5188ca22c40c9d8793b9fb185731cb7f5a7065f1cc33fe5ad380e3a7406.scope - libcontainer container 499cf5188ca22c40c9d8793b9fb185731cb7f5a7065f1cc33fe5ad380e3a7406. Jan 24 00:42:53.618403 containerd[1467]: time="2026-01-24T00:42:53.618263274Z" level=info msg="StartContainer for \"499cf5188ca22c40c9d8793b9fb185731cb7f5a7065f1cc33fe5ad380e3a7406\" returns successfully" Jan 24 00:42:54.511678 kubelet[2580]: E0124 00:42:54.511626 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:54.851834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009003494.mount: Deactivated successfully. 
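The unit name var-lib-containerd-tmpmounts-containerd\x2dmount3009003494.mount above shows systemd's path escaping: '/' separators become '-', and a literal '-' inside a path component is written as \x2d. A simplified sketch of that encoding (real systemd-escape handles more special characters; the source path below is inferred, the log only shows the unit name):

```python
# Sketch: simplified systemd path escaping, enough to explain the "\x2d"
# in the tmpmounts mount unit above. Real systemd-escape covers more cases.
def systemd_escape_path(path: str) -> str:
    components = path.strip("/").split("/")
    return "-".join(c.replace("-", "\\x2d") for c in components)

# Inferred source path for the unit in the log (an assumption):
print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount3009003494") + ".mount")
# -> var-lib-containerd-tmpmounts-containerd\x2dmount3009003494.mount
```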
Jan 24 00:42:55.515797 kubelet[2580]: E0124 00:42:55.515689 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:55.536252 kubelet[2580]: E0124 00:42:55.535953 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:55.587588 kubelet[2580]: I0124 00:42:55.587474 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rbm9" podStartSLOduration=4.587445109 podStartE2EDuration="4.587445109s" podCreationTimestamp="2026-01-24 00:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:42:54.546165179 +0000 UTC m=+6.720367032" watchObservedRunningTime="2026-01-24 00:42:55.587445109 +0000 UTC m=+7.761646972" Jan 24 00:42:56.343166 containerd[1467]: time="2026-01-24T00:42:56.340607082Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:56.345451 containerd[1467]: time="2026-01-24T00:42:56.345339852Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:42:56.347717 containerd[1467]: time="2026-01-24T00:42:56.347536881Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:56.362302 containerd[1467]: time="2026-01-24T00:42:56.361842597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:42:56.364255 containerd[1467]: time="2026-01-24T00:42:56.363985141Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.530360035s" Jan 24 00:42:56.364255 containerd[1467]: time="2026-01-24T00:42:56.364069007Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:42:56.371305 containerd[1467]: time="2026-01-24T00:42:56.371054533Z" level=info msg="CreateContainer within sandbox \"dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:42:56.401332 containerd[1467]: time="2026-01-24T00:42:56.401163047Z" level=info msg="CreateContainer within sandbox \"dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d\"" Jan 24 00:42:56.402267 containerd[1467]: time="2026-01-24T00:42:56.401879803Z" level=info msg="StartContainer for \"32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d\"" Jan 24 00:42:56.489497 systemd[1]: Started cri-containerd-32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d.scope - 
libcontainer container 32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d. Jan 24 00:42:56.521997 kubelet[2580]: E0124 00:42:56.521775 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:56.592383 containerd[1467]: time="2026-01-24T00:42:56.592161886Z" level=info msg="StartContainer for \"32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d\" returns successfully" Jan 24 00:42:56.867840 kubelet[2580]: E0124 00:42:56.867390 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:57.527034 kubelet[2580]: E0124 00:42:57.526751 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:57.527640 kubelet[2580]: E0124 00:42:57.527602 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:42:58.529261 kubelet[2580]: E0124 00:42:58.529174 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:00.121845 systemd[1]: cri-containerd-32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d.scope: Deactivated successfully. Jan 24 00:43:00.191091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d-rootfs.mount: Deactivated successfully. 
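The recurring dns.go "Nameserver limits exceeded" errors throughout this log have a single cause: the host's resolv.conf lists more nameservers than the kubelet passes through to pods, so only the first three are applied (1.1.1.1, 1.0.0.1, 8.8.8.8 here). A small sketch of that truncation; the three-entry limit matches the glibc resolver limit the kubelet enforces, and the dropped entries are not visible in this log:

```python
# Sketch: how the kubelet truncates the host nameserver list. Only the first
# three survive, matching the "applied nameserver line" in the errors above.
MAX_NAMESERVERS = 3

def applied_nameservers(host_nameservers):
    if len(host_nameservers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been omitted")
    return host_nameservers[:MAX_NAMESERVERS]

# The fourth entry is a hypothetical placeholder; the log does not say what was dropped.
print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"]))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```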
Jan 24 00:43:00.346543 containerd[1467]: time="2026-01-24T00:43:00.346433061Z" level=info msg="shim disconnected" id=32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d namespace=k8s.io Jan 24 00:43:00.346543 containerd[1467]: time="2026-01-24T00:43:00.346529942Z" level=warning msg="cleaning up after shim disconnected" id=32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d namespace=k8s.io Jan 24 00:43:00.346543 containerd[1467]: time="2026-01-24T00:43:00.346543066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:43:00.395973 containerd[1467]: time="2026-01-24T00:43:00.393186141Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:43:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:43:00.539973 kubelet[2580]: I0124 00:43:00.538830 2580 scope.go:117] "RemoveContainer" containerID="32e23f2a998df28e9fb7398a6a1e0e88db45bc857e262327884519239915674d" Jan 24 00:43:00.557469 containerd[1467]: time="2026-01-24T00:43:00.556323061Z" level=info msg="CreateContainer within sandbox \"dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 24 00:43:00.584485 containerd[1467]: time="2026-01-24T00:43:00.584253791Z" level=info msg="CreateContainer within sandbox \"dd98d10c5cb13801b2eb23b813d53df5f1798c70ff0dbd1413272842668122da\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8ab9445f28f9a6def2c776356de6de99a6bf8265423a2a219e2bfb75038db953\"" Jan 24 00:43:00.587001 containerd[1467]: time="2026-01-24T00:43:00.586288778Z" level=info msg="StartContainer for \"8ab9445f28f9a6def2c776356de6de99a6bf8265423a2a219e2bfb75038db953\"" Jan 24 00:43:00.686402 systemd[1]: Started cri-containerd-8ab9445f28f9a6def2c776356de6de99a6bf8265423a2a219e2bfb75038db953.scope - libcontainer container 8ab9445f28f9a6def2c776356de6de99a6bf8265423a2a219e2bfb75038db953. Jan 24 00:43:00.768740 containerd[1467]: time="2026-01-24T00:43:00.767648731Z" level=info msg="StartContainer for \"8ab9445f28f9a6def2c776356de6de99a6bf8265423a2a219e2bfb75038db953\" returns successfully" Jan 24 00:43:01.594338 kubelet[2580]: I0124 00:43:01.593095 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-x2jvz" podStartSLOduration=7.052591784 podStartE2EDuration="10.593000326s" podCreationTimestamp="2026-01-24 00:42:51 +0000 UTC" firstStartedPulling="2026-01-24 00:42:52.827729437 +0000 UTC m=+5.001931290" lastFinishedPulling="2026-01-24 00:42:56.368137979 +0000 UTC m=+8.542339832" observedRunningTime="2026-01-24 00:42:57.547509167 +0000 UTC m=+9.721711039" watchObservedRunningTime="2026-01-24 00:43:01.593000326 +0000 UTC m=+13.767202189" Jan 24 00:43:04.537432 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 24 00:43:04.547708 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 24 00:43:04.567832 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:45968.service: Deactivated successfully. Jan 24 00:43:04.579404 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:43:04.579869 systemd[1]: session-9.scope: Consumed 10.204s CPU time, 159.8M memory peak, 0B memory swap peak. Jan 24 00:43:04.583785 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:43:04.587702 systemd-logind[1447]: Removed session 9. 
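The pod_startup_latency_tracker entry for tigera-operator-7dcd859c48-x2jvz above is internally consistent: podStartE2EDuration is the gap from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration is that gap minus the image-pull window (lastFinishedPulling - firstStartedPulling). Re-deriving it from the log's own timestamps, truncated to microseconds:

```python
# Sketch: re-derive the tigera-operator startup durations from the timestamps
# in the log entry above. All values are copied from the log (microsecond precision).
from datetime import datetime

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

created   = datetime.strptime("2026-01-24 00:42:51", "%Y-%m-%d %H:%M:%S")
pull_from = ts("2026-01-24 00:42:52.827729")   # firstStartedPulling
pull_to   = ts("2026-01-24 00:42:56.368137")   # lastFinishedPulling
observed  = ts("2026-01-24 00:43:01.593000")   # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"podStartE2EDuration ~= {e2e:.3f}s")   # log reports 10.593000326s
print(f"podStartSLOduration ~= {slo:.3f}s")   # log reports 7.052591784s
```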
Jan 24 00:43:11.501205 systemd[1]: Created slice kubepods-besteffort-pod71276e6c_f8e9_45b6_bbff_df0f04ee24c1.slice - libcontainer container kubepods-besteffort-pod71276e6c_f8e9_45b6_bbff_df0f04ee24c1.slice. Jan 24 00:43:11.536724 kubelet[2580]: I0124 00:43:11.532879 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71276e6c-f8e9-45b6-bbff-df0f04ee24c1-tigera-ca-bundle\") pod \"calico-typha-78647dbc74-22zhm\" (UID: \"71276e6c-f8e9-45b6-bbff-df0f04ee24c1\") " pod="calico-system/calico-typha-78647dbc74-22zhm" Jan 24 00:43:11.536724 kubelet[2580]: I0124 00:43:11.533025 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzrg\" (UniqueName: \"kubernetes.io/projected/71276e6c-f8e9-45b6-bbff-df0f04ee24c1-kube-api-access-xhzrg\") pod \"calico-typha-78647dbc74-22zhm\" (UID: \"71276e6c-f8e9-45b6-bbff-df0f04ee24c1\") " pod="calico-system/calico-typha-78647dbc74-22zhm" Jan 24 00:43:11.536724 kubelet[2580]: I0124 00:43:11.533052 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71276e6c-f8e9-45b6-bbff-df0f04ee24c1-typha-certs\") pod \"calico-typha-78647dbc74-22zhm\" (UID: \"71276e6c-f8e9-45b6-bbff-df0f04ee24c1\") " pod="calico-system/calico-typha-78647dbc74-22zhm" Jan 24 00:43:11.725067 systemd[1]: Created slice kubepods-besteffort-podd56083c5_83be_41f9_94e7_886b8a9c1787.slice - libcontainer container kubepods-besteffort-podd56083c5_83be_41f9_94e7_886b8a9c1787.slice. Jan 24 00:43:11.834138 kubelet[2580]: I0124 00:43:11.833819 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d56083c5-83be-41f9-94e7-886b8a9c1787-tigera-ca-bundle\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834138 kubelet[2580]: I0124 00:43:11.833978 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d56083c5-83be-41f9-94e7-886b8a9c1787-node-certs\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834138 kubelet[2580]: I0124 00:43:11.834034 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-cni-net-dir\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834138 kubelet[2580]: I0124 00:43:11.834059 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-flexvol-driver-host\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834138 kubelet[2580]: I0124 00:43:11.834160 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-cni-log-dir\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " 
pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834603 kubelet[2580]: I0124 00:43:11.834184 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-lib-modules\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834603 kubelet[2580]: I0124 00:43:11.834213 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-var-run-calico\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834603 kubelet[2580]: I0124 00:43:11.834236 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-policysync\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834603 kubelet[2580]: I0124 00:43:11.834258 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-cni-bin-dir\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834603 kubelet[2580]: I0124 00:43:11.834280 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-var-lib-calico\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834707 kubelet[2580]: I0124 00:43:11.834312 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d56083c5-83be-41f9-94e7-886b8a9c1787-xtables-lock\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.834707 kubelet[2580]: I0124 00:43:11.834334 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t95rz\" (UniqueName: \"kubernetes.io/projected/d56083c5-83be-41f9-94e7-886b8a9c1787-kube-api-access-t95rz\") pod \"calico-node-kfd6z\" (UID: \"d56083c5-83be-41f9-94e7-886b8a9c1787\") " pod="calico-system/calico-node-kfd6z" Jan 24 00:43:11.849592 kubelet[2580]: E0124 00:43:11.849270 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:11.875524 containerd[1467]: time="2026-01-24T00:43:11.875317498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78647dbc74-22zhm,Uid:71276e6c-f8e9-45b6-bbff-df0f04ee24c1,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:11.910308 kubelet[2580]: E0124 00:43:11.909810 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:11.947015 kubelet[2580]: E0124 00:43:11.946043 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.947015 kubelet[2580]: W0124 00:43:11.946275 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.948030 kubelet[2580]: E0124 00:43:11.947721 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.956975 kubelet[2580]: E0124 00:43:11.950284 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.956975 kubelet[2580]: W0124 00:43:11.950486 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.956975 kubelet[2580]: E0124 00:43:11.950659 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.956975 kubelet[2580]: E0124 00:43:11.955633 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.956975 kubelet[2580]: W0124 00:43:11.955653 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.956975 kubelet[2580]: E0124 00:43:11.955677 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.958811 kubelet[2580]: E0124 00:43:11.957741 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.958811 kubelet[2580]: W0124 00:43:11.957869 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.958811 kubelet[2580]: E0124 00:43:11.958157 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.967696 kubelet[2580]: E0124 00:43:11.962848 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.968557 kubelet[2580]: W0124 00:43:11.968279 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.977615 kubelet[2580]: E0124 00:43:11.977440 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:11.983666 kubelet[2580]: E0124 00:43:11.983298 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.983809 kubelet[2580]: W0124 00:43:11.983614 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.984125 kubelet[2580]: E0124 00:43:11.983857 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.985335 kubelet[2580]: E0124 00:43:11.985045 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.985335 kubelet[2580]: W0124 00:43:11.985302 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.986062 kubelet[2580]: E0124 00:43:11.985805 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.989342 kubelet[2580]: E0124 00:43:11.988652 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.989342 kubelet[2580]: W0124 00:43:11.988676 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.989342 kubelet[2580]: E0124 00:43:11.988989 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.989758 kubelet[2580]: E0124 00:43:11.989544 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.989758 kubelet[2580]: W0124 00:43:11.989565 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.989758 kubelet[2580]: E0124 00:43:11.989582 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.990733 kubelet[2580]: E0124 00:43:11.990448 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.990733 kubelet[2580]: W0124 00:43:11.990465 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.990733 kubelet[2580]: E0124 00:43:11.990481 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:11.992007 kubelet[2580]: E0124 00:43:11.991989 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.992239 kubelet[2580]: W0124 00:43:11.992153 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.992239 kubelet[2580]: E0124 00:43:11.992181 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.993157 kubelet[2580]: E0124 00:43:11.992884 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.993157 kubelet[2580]: W0124 00:43:11.993022 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.993157 kubelet[2580]: E0124 00:43:11.993038 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.993304 containerd[1467]: time="2026-01-24T00:43:11.992416357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:11.993823 containerd[1467]: time="2026-01-24T00:43:11.993366705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:11.993823 containerd[1467]: time="2026-01-24T00:43:11.993778837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:11.994152 kubelet[2580]: E0124 00:43:11.994137 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.994253 kubelet[2580]: W0124 00:43:11.994155 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.994253 kubelet[2580]: E0124 00:43:11.994168 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.994643 kubelet[2580]: E0124 00:43:11.994468 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.994643 kubelet[2580]: W0124 00:43:11.994539 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.994643 kubelet[2580]: E0124 00:43:11.994555 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:11.995050 kubelet[2580]: E0124 00:43:11.994843 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.995050 kubelet[2580]: W0124 00:43:11.995024 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.995050 kubelet[2580]: E0124 00:43:11.995039 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.995460 containerd[1467]: time="2026-01-24T00:43:11.995281196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:11.995526 kubelet[2580]: E0124 00:43:11.995373 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.995526 kubelet[2580]: W0124 00:43:11.995387 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.995526 kubelet[2580]: E0124 00:43:11.995401 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.995662 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.997178 kubelet[2580]: W0124 00:43:11.995676 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.995688 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.996203 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.997178 kubelet[2580]: W0124 00:43:11.996214 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.996226 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.996477 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.997178 kubelet[2580]: W0124 00:43:11.996487 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.996498 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.997178 kubelet[2580]: E0124 00:43:11.996734 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.998023 kubelet[2580]: W0124 00:43:11.996744 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.998023 kubelet[2580]: E0124 00:43:11.996755 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.998023 kubelet[2580]: E0124 00:43:11.997154 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.998023 kubelet[2580]: W0124 00:43:11.997165 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.998023 kubelet[2580]: E0124 00:43:11.997179 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:11.999569 kubelet[2580]: E0124 00:43:11.999458 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:11.999569 kubelet[2580]: W0124 00:43:11.999512 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:11.999569 kubelet[2580]: E0124 00:43:11.999541 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.012282 kubelet[2580]: E0124 00:43:12.012017 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.012282 kubelet[2580]: W0124 00:43:12.012157 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.013774 kubelet[2580]: E0124 00:43:12.013689 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.037475 kubelet[2580]: E0124 00:43:12.037393 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.037475 kubelet[2580]: W0124 00:43:12.037471 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.037718 kubelet[2580]: E0124 00:43:12.037507 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.039992 kubelet[2580]: I0124 00:43:12.038014 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3935770d-1f88-434a-a13a-250f66f25ebf-registration-dir\") pod \"csi-node-driver-xzhgv\" (UID: \"3935770d-1f88-434a-a13a-250f66f25ebf\") " pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:12.040691 kubelet[2580]: E0124 00:43:12.040507 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.040691 kubelet[2580]: W0124 00:43:12.040666 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.040796 kubelet[2580]: E0124 00:43:12.040743 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.041207 kubelet[2580]: E0124 00:43:12.040873 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:12.043484 kubelet[2580]: E0124 00:43:12.043453 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.044004 kubelet[2580]: W0124 00:43:12.043678 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.044287 kubelet[2580]: E0124 00:43:12.044263 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.046396 kubelet[2580]: E0124 00:43:12.046381 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.046572 kubelet[2580]: W0124 00:43:12.046453 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.046572 kubelet[2580]: E0124 00:43:12.046470 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.046572 kubelet[2580]: I0124 00:43:12.046502 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3935770d-1f88-434a-a13a-250f66f25ebf-socket-dir\") pod \"csi-node-driver-xzhgv\" (UID: \"3935770d-1f88-434a-a13a-250f66f25ebf\") " pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:12.047669 containerd[1467]: time="2026-01-24T00:43:12.047572276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kfd6z,Uid:d56083c5-83be-41f9-94e7-886b8a9c1787,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:12.051193 kubelet[2580]: E0124 00:43:12.051158 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.051315 kubelet[2580]: W0124 00:43:12.051295 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.052722 kubelet[2580]: E0124 00:43:12.052237 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.052722 kubelet[2580]: I0124 00:43:12.052387 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3935770d-1f88-434a-a13a-250f66f25ebf-varrun\") pod \"csi-node-driver-xzhgv\" (UID: \"3935770d-1f88-434a-a13a-250f66f25ebf\") " pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:12.053475 kubelet[2580]: E0124 00:43:12.053288 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.053475 kubelet[2580]: W0124 00:43:12.053306 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.053475 kubelet[2580]: E0124 00:43:12.053321 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.053475 kubelet[2580]: I0124 00:43:12.053345 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs859\" (UniqueName: \"kubernetes.io/projected/3935770d-1f88-434a-a13a-250f66f25ebf-kube-api-access-gs859\") pod \"csi-node-driver-xzhgv\" (UID: \"3935770d-1f88-434a-a13a-250f66f25ebf\") " pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:12.054854 kubelet[2580]: E0124 00:43:12.054836 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.055499 kubelet[2580]: W0124 00:43:12.055284 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.055499 kubelet[2580]: E0124 00:43:12.055319 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.055499 kubelet[2580]: I0124 00:43:12.055343 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3935770d-1f88-434a-a13a-250f66f25ebf-kubelet-dir\") pod \"csi-node-driver-xzhgv\" (UID: \"3935770d-1f88-434a-a13a-250f66f25ebf\") " pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:12.055827 kubelet[2580]: E0124 00:43:12.055810 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.057294 kubelet[2580]: W0124 00:43:12.056785 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.057294 kubelet[2580]: E0124 00:43:12.057059 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.059200 kubelet[2580]: E0124 00:43:12.058421 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.059200 kubelet[2580]: W0124 00:43:12.058567 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.059200 kubelet[2580]: E0124 00:43:12.058760 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.060476 kubelet[2580]: E0124 00:43:12.060392 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.060476 kubelet[2580]: W0124 00:43:12.060467 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.061327 kubelet[2580]: E0124 00:43:12.061227 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.062159 kubelet[2580]: E0124 00:43:12.062001 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.062555 kubelet[2580]: W0124 00:43:12.062257 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.062555 kubelet[2580]: E0124 00:43:12.062476 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.063633 systemd[1]: Started cri-containerd-7b677af1e0462bfdb0d2eb88a97ba35073082b8685f3fdd4989b15497b4d5d8c.scope - libcontainer container 7b677af1e0462bfdb0d2eb88a97ba35073082b8685f3fdd4989b15497b4d5d8c. 
Jan 24 00:43:12.067421 kubelet[2580]: E0124 00:43:12.065247 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.067421 kubelet[2580]: W0124 00:43:12.065265 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.067421 kubelet[2580]: E0124 00:43:12.065292 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.067421 kubelet[2580]: E0124 00:43:12.067065 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.067421 kubelet[2580]: W0124 00:43:12.067176 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.067421 kubelet[2580]: E0124 00:43:12.067205 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.069234 kubelet[2580]: E0124 00:43:12.069055 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.069234 kubelet[2580]: W0124 00:43:12.069172 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.069234 kubelet[2580]: E0124 00:43:12.069204 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.070392 kubelet[2580]: E0124 00:43:12.070025 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.070392 kubelet[2580]: W0124 00:43:12.070311 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.070392 kubelet[2580]: E0124 00:43:12.070325 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.137777 containerd[1467]: time="2026-01-24T00:43:12.134868296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:12.137777 containerd[1467]: time="2026-01-24T00:43:12.135043000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:12.137777 containerd[1467]: time="2026-01-24T00:43:12.135061825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:12.137777 containerd[1467]: time="2026-01-24T00:43:12.135227612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:12.157633 kubelet[2580]: E0124 00:43:12.157387 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.157633 kubelet[2580]: W0124 00:43:12.157431 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.157633 kubelet[2580]: E0124 00:43:12.157470 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.167867 kubelet[2580]: E0124 00:43:12.167509 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.167867 kubelet[2580]: W0124 00:43:12.167541 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.167867 kubelet[2580]: E0124 00:43:12.167582 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.168729 kubelet[2580]: E0124 00:43:12.168502 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.168729 kubelet[2580]: W0124 00:43:12.168519 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.168729 kubelet[2580]: E0124 00:43:12.168634 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.169320 kubelet[2580]: E0124 00:43:12.169178 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.169320 kubelet[2580]: W0124 00:43:12.169193 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.170043 kubelet[2580]: E0124 00:43:12.169741 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.170154 kubelet[2580]: E0124 00:43:12.170064 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.170154 kubelet[2580]: W0124 00:43:12.170126 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.171687 kubelet[2580]: E0124 00:43:12.170968 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.171687 kubelet[2580]: E0124 00:43:12.171050 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.171687 kubelet[2580]: W0124 00:43:12.171057 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.173597 kubelet[2580]: E0124 00:43:12.172169 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.173597 kubelet[2580]: E0124 00:43:12.172298 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.173597 kubelet[2580]: W0124 00:43:12.172307 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.173597 kubelet[2580]: E0124 00:43:12.172461 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.174609 kubelet[2580]: E0124 00:43:12.174457 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.175210 kubelet[2580]: W0124 00:43:12.174639 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.175460 kubelet[2580]: E0124 00:43:12.175374 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.176309 kubelet[2580]: E0124 00:43:12.176245 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.176309 kubelet[2580]: W0124 00:43:12.176299 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.176645 kubelet[2580]: E0124 00:43:12.176464 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.176989 kubelet[2580]: E0124 00:43:12.176838 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.177146 kubelet[2580]: W0124 00:43:12.177040 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.177396 kubelet[2580]: E0124 00:43:12.177266 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.177543 kubelet[2580]: E0124 00:43:12.177493 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.177543 kubelet[2580]: W0124 00:43:12.177539 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.177746 kubelet[2580]: E0124 00:43:12.177696 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.178349 kubelet[2580]: E0124 00:43:12.178303 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.178394 kubelet[2580]: W0124 00:43:12.178353 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.178558 kubelet[2580]: E0124 00:43:12.178507 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.179225 systemd[1]: Started cri-containerd-3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1.scope - libcontainer container 3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1. Jan 24 00:43:12.179763 kubelet[2580]: E0124 00:43:12.179628 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.179763 kubelet[2580]: W0124 00:43:12.179644 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.181862 kubelet[2580]: E0124 00:43:12.180174 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.181862 kubelet[2580]: E0124 00:43:12.180713 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.181862 kubelet[2580]: W0124 00:43:12.180725 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.181862 kubelet[2580]: E0124 00:43:12.181552 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.183739 kubelet[2580]: E0124 00:43:12.183647 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.183739 kubelet[2580]: W0124 00:43:12.183707 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.184213 kubelet[2580]: E0124 00:43:12.184066 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.185157 kubelet[2580]: E0124 00:43:12.185025 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.185157 kubelet[2580]: W0124 00:43:12.185130 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.185605 kubelet[2580]: E0124 00:43:12.185392 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.186004 kubelet[2580]: E0124 00:43:12.185882 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.186229 kubelet[2580]: W0124 00:43:12.186043 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.186348 kubelet[2580]: E0124 00:43:12.186329 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.186727 kubelet[2580]: E0124 00:43:12.186694 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.186727 kubelet[2580]: W0124 00:43:12.186709 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.187042 kubelet[2580]: E0124 00:43:12.187016 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.187729 kubelet[2580]: E0124 00:43:12.187583 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.187729 kubelet[2580]: W0124 00:43:12.187642 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.188175 kubelet[2580]: E0124 00:43:12.187993 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.189965 kubelet[2580]: E0124 00:43:12.189781 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.189965 kubelet[2580]: W0124 00:43:12.189833 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.190133 kubelet[2580]: E0124 00:43:12.189996 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.190615 kubelet[2580]: E0124 00:43:12.190552 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.190615 kubelet[2580]: W0124 00:43:12.190564 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.191012 kubelet[2580]: E0124 00:43:12.190695 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.191243 kubelet[2580]: E0124 00:43:12.191215 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.191243 kubelet[2580]: W0124 00:43:12.191229 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.191530 kubelet[2580]: E0124 00:43:12.191376 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.191872 kubelet[2580]: E0124 00:43:12.191794 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.191872 kubelet[2580]: W0124 00:43:12.191849 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.192149 kubelet[2580]: E0124 00:43:12.192132 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.192542 kubelet[2580]: E0124 00:43:12.192498 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.192542 kubelet[2580]: W0124 00:43:12.192513 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.192542 kubelet[2580]: E0124 00:43:12.192530 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:43:12.193831 kubelet[2580]: E0124 00:43:12.193793 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.193831 kubelet[2580]: W0124 00:43:12.193832 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.194138 kubelet[2580]: E0124 00:43:12.194054 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.232999 kubelet[2580]: E0124 00:43:12.230657 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:43:12.232999 kubelet[2580]: W0124 00:43:12.230676 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:43:12.232999 kubelet[2580]: E0124 00:43:12.230696 2580 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:43:12.239135 containerd[1467]: time="2026-01-24T00:43:12.239024408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78647dbc74-22zhm,Uid:71276e6c-f8e9-45b6-bbff-df0f04ee24c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b677af1e0462bfdb0d2eb88a97ba35073082b8685f3fdd4989b15497b4d5d8c\"" Jan 24 00:43:12.240795 containerd[1467]: time="2026-01-24T00:43:12.240397618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kfd6z,Uid:d56083c5-83be-41f9-94e7-886b8a9c1787,Namespace:calico-system,Attempt:0,} returns sandbox id \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\"" Jan 24 00:43:12.241446 kubelet[2580]: E0124 00:43:12.241302 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:12.241784 kubelet[2580]: E0124 00:43:12.241725 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:12.243074 containerd[1467]: time="2026-01-24T00:43:12.243057222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:43:12.798509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360296386.mount: Deactivated successfully. 
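The repeated kubelet error triplets above all come from FlexVolume plugin probing: the kubelet walks the plugin directory, finds the vendor~driver subdirectory nodeagent~uds, and tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init. Because that binary does not exist yet, the call fails ("executable file not found in $PATH"), stdout stays empty, and unmarshalling the expected JSON status object fails ("unexpected end of JSON input"). The kubelet re-probes the plugin directory periodically, which is why the same three lines recur until something installs the driver, presumably the flexvol-driver container created from the pod2daemon-flexvol image later in this log. A minimal, hypothetical sketch (illustrative only, not the real nodeagent~uds driver) of a binary that would satisfy the init probe:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus mirrors the JSON object the kubelet tries to unmarshal after
    // every FlexVolume driver call; an empty stdout is what produces
    // "unexpected end of JSON input" in the entries above.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus) {
    	out, _ := json.Marshal(s)
    	fmt.Println(string(out))
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// The probe succeeds as soon as init returns a Success status.
    		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    		return
    	}
    	// Other calls (mount, unmount, ...) are out of scope for this sketch.
    	reply(driverStatus{Status: "Not supported"})
    }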
Jan 24 00:43:12.911930 containerd[1467]: time="2026-01-24T00:43:12.911855080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:12.913494 containerd[1467]: time="2026-01-24T00:43:12.913385992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 24 00:43:12.914964 containerd[1467]: time="2026-01-24T00:43:12.914837177Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:12.917480 containerd[1467]: time="2026-01-24T00:43:12.917417123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:12.918382 containerd[1467]: time="2026-01-24T00:43:12.918235594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 675.012004ms" Jan 24 00:43:12.918382 containerd[1467]: time="2026-01-24T00:43:12.918298732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:43:12.920228 containerd[1467]: time="2026-01-24T00:43:12.920149866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:43:12.922303 containerd[1467]: time="2026-01-24T00:43:12.922254012Z" level=info msg="CreateContainer within sandbox \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:43:12.947390 containerd[1467]: time="2026-01-24T00:43:12.947287607Z" level=info msg="CreateContainer within sandbox \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6\"" Jan 24 00:43:12.948635 containerd[1467]: time="2026-01-24T00:43:12.948444397Z" level=info msg="StartContainer for \"32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6\"" Jan 24 00:43:13.015365 systemd[1]: Started cri-containerd-32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6.scope - libcontainer container 32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6. Jan 24 00:43:13.069449 containerd[1467]: time="2026-01-24T00:43:13.069197515Z" level=info msg="StartContainer for \"32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6\" returns successfully" Jan 24 00:43:13.083664 systemd[1]: cri-containerd-32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6.scope: Deactivated successfully. 
Jan 24 00:43:13.147594 containerd[1467]: time="2026-01-24T00:43:13.147405733Z" level=info msg="shim disconnected" id=32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6 namespace=k8s.io Jan 24 00:43:13.147594 containerd[1467]: time="2026-01-24T00:43:13.147487945Z" level=warning msg="cleaning up after shim disconnected" id=32e95104db1783865ab865d64f357feea43a80e58a806468a1a29b698c05aef6 namespace=k8s.io Jan 24 00:43:13.147594 containerd[1467]: time="2026-01-24T00:43:13.147504315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:43:13.383644 kubelet[2580]: E0124 00:43:13.383035 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:13.621953 kubelet[2580]: E0124 00:43:13.621827 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:14.901676 containerd[1467]: time="2026-01-24T00:43:14.901434373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:14.904157 containerd[1467]: time="2026-01-24T00:43:14.903707507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 24 00:43:14.906430 containerd[1467]: time="2026-01-24T00:43:14.906322237Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:14.913756 containerd[1467]: time="2026-01-24T00:43:14.913671806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:14.916776 containerd[1467]: time="2026-01-24T00:43:14.916207480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.996029542s" Jan 24 00:43:14.916776 containerd[1467]: time="2026-01-24T00:43:14.916494631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:43:14.923404 containerd[1467]: time="2026-01-24T00:43:14.922154736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:43:14.962477 containerd[1467]: time="2026-01-24T00:43:14.962316336Z" level=info msg="CreateContainer within sandbox \"7b677af1e0462bfdb0d2eb88a97ba35073082b8685f3fdd4989b15497b4d5d8c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:43:14.998636 containerd[1467]: time="2026-01-24T00:43:14.998392731Z" level=info msg="CreateContainer within sandbox \"7b677af1e0462bfdb0d2eb88a97ba35073082b8685f3fdd4989b15497b4d5d8c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e9465388366d321de43c518e206d28fd68288fe7012790af4bbb45a57015aad2\"" 
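The dns.go:153 warnings scattered through this section ("the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the host's resolver configuration lists more than the three nameservers resolv.conf supports, so the kubelet keeps the first three and reports that the rest were omitted. A rough sketch of that truncation, assuming the resolver config lives at /etc/resolv.conf (the path the kubelet actually reads is configurable):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // classic resolv.conf limit enforced by the kubelet

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("Nameserver limits exceeded: applied %v, omitted %v\n",
    			servers[:maxNameservers], servers[maxNameservers:])
    	} else {
    		fmt.Printf("applied nameservers: %v\n", servers)
    	}
    }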
Jan 24 00:43:15.001780 containerd[1467]: time="2026-01-24T00:43:15.001734827Z" level=info msg="StartContainer for \"e9465388366d321de43c518e206d28fd68288fe7012790af4bbb45a57015aad2\"" Jan 24 00:43:15.054598 systemd[1]: Started cri-containerd-e9465388366d321de43c518e206d28fd68288fe7012790af4bbb45a57015aad2.scope - libcontainer container e9465388366d321de43c518e206d28fd68288fe7012790af4bbb45a57015aad2. Jan 24 00:43:15.182180 containerd[1467]: time="2026-01-24T00:43:15.181344247Z" level=info msg="StartContainer for \"e9465388366d321de43c518e206d28fd68288fe7012790af4bbb45a57015aad2\" returns successfully" Jan 24 00:43:15.383217 kubelet[2580]: E0124 00:43:15.382608 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:15.676958 kubelet[2580]: E0124 00:43:15.676449 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:15.764996 kubelet[2580]: I0124 00:43:15.762014 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78647dbc74-22zhm" podStartSLOduration=2.084215078 podStartE2EDuration="4.76198514s" podCreationTimestamp="2026-01-24 00:43:11 +0000 UTC" firstStartedPulling="2026-01-24 00:43:12.24268579 +0000 UTC m=+24.416887642" lastFinishedPulling="2026-01-24 00:43:14.920455851 +0000 UTC m=+27.094657704" observedRunningTime="2026-01-24 00:43:15.759577348 +0000 UTC m=+27.933779241" watchObservedRunningTime="2026-01-24 00:43:15.76198514 +0000 UTC m=+27.936187004" Jan 24 00:43:16.681128 kubelet[2580]: E0124 00:43:16.680997 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:17.382507 kubelet[2580]: E0124 00:43:17.382341 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:17.682407 kubelet[2580]: E0124 00:43:17.681641 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:18.339088 containerd[1467]: time="2026-01-24T00:43:18.338776129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:18.340635 containerd[1467]: time="2026-01-24T00:43:18.340437646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:43:18.342501 containerd[1467]: time="2026-01-24T00:43:18.342442160Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:18.348305 containerd[1467]: time="2026-01-24T00:43:18.347443440Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:18.348305 containerd[1467]: time="2026-01-24T00:43:18.348114524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.42581486s" Jan 24 00:43:18.348305 containerd[1467]: time="2026-01-24T00:43:18.348147495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:43:18.352669 containerd[1467]: time="2026-01-24T00:43:18.352585598Z" level=info msg="CreateContainer within sandbox \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:43:18.379430 containerd[1467]: time="2026-01-24T00:43:18.379347907Z" level=info msg="CreateContainer within sandbox \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b\"" Jan 24 00:43:18.380480 containerd[1467]: time="2026-01-24T00:43:18.380358618Z" level=info msg="StartContainer for \"cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b\"" Jan 24 00:43:18.473663 systemd[1]: run-containerd-runc-k8s.io-cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b-runc.p7sIiJ.mount: Deactivated successfully. Jan 24 00:43:18.491262 systemd[1]: Started cri-containerd-cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b.scope - libcontainer container cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b. Jan 24 00:43:18.556189 containerd[1467]: time="2026-01-24T00:43:18.555986150Z" level=info msg="StartContainer for \"cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b\" returns successfully" Jan 24 00:43:18.690113 kubelet[2580]: E0124 00:43:18.689797 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:19.381183 kubelet[2580]: E0124 00:43:19.381120 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:19.850473 kubelet[2580]: E0124 00:43:19.844260 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:20.403452 systemd[1]: cri-containerd-cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b.scope: Deactivated successfully. Jan 24 00:43:20.404524 systemd[1]: cri-containerd-cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b.scope: Consumed 2.117s CPU time. 
Jan 24 00:43:20.412439 kubelet[2580]: I0124 00:43:20.412268 2580 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:43:20.454408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b-rootfs.mount: Deactivated successfully. Jan 24 00:43:20.508764 systemd[1]: Created slice kubepods-burstable-podd3c71020_1a24_4b11_83e5_a9fa3d70fc14.slice - libcontainer container kubepods-burstable-podd3c71020_1a24_4b11_83e5_a9fa3d70fc14.slice. Jan 24 00:43:20.529249 systemd[1]: Created slice kubepods-besteffort-podd1ac7bc7_7591_48d6_8111_89103d85ee5f.slice - libcontainer container kubepods-besteffort-podd1ac7bc7_7591_48d6_8111_89103d85ee5f.slice. Jan 24 00:43:20.544659 systemd[1]: Created slice kubepods-burstable-pode3c21f44_8ae7_42a7_a6ec_8b8562e76305.slice - libcontainer container kubepods-burstable-pode3c21f44_8ae7_42a7_a6ec_8b8562e76305.slice. Jan 24 00:43:20.575629 systemd[1]: Created slice kubepods-besteffort-podaa23a976_feaf_4984_bbe7_f5e048e9da19.slice - libcontainer container kubepods-besteffort-podaa23a976_feaf_4984_bbe7_f5e048e9da19.slice. Jan 24 00:43:20.583690 systemd[1]: Created slice kubepods-besteffort-pod9becf02e_a8cd_4e6f_92b4_b46fa4218220.slice - libcontainer container kubepods-besteffort-pod9becf02e_a8cd_4e6f_92b4_b46fa4218220.slice. Jan 24 00:43:20.596091 systemd[1]: Created slice kubepods-besteffort-pod15e93dd9_ce77_43a7_8874_cb6fdf7c51a0.slice - libcontainer container kubepods-besteffort-pod15e93dd9_ce77_43a7_8874_cb6fdf7c51a0.slice. Jan 24 00:43:20.602088 containerd[1467]: time="2026-01-24T00:43:20.601542265Z" level=info msg="shim disconnected" id=cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b namespace=k8s.io Jan 24 00:43:20.602088 containerd[1467]: time="2026-01-24T00:43:20.601885360Z" level=warning msg="cleaning up after shim disconnected" id=cc8dc2548ac490439a40bac4a5591ee9bff86f1aba240ab15ea3b2e3beefb09b namespace=k8s.io Jan 24 00:43:20.602088 containerd[1467]: time="2026-01-24T00:43:20.602069211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:43:20.623137 systemd[1]: Created slice kubepods-besteffort-pod6a3c8225_48cc_431d_9350_25407dc6fc7b.slice - libcontainer container kubepods-besteffort-pod6a3c8225_48cc_431d_9350_25407dc6fc7b.slice. 
Jan 24 00:43:20.626775 kubelet[2580]: I0124 00:43:20.626660 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9becf02e-a8cd-4e6f-92b4-b46fa4218220-calico-apiserver-certs\") pod \"calico-apiserver-6fc4d58c87-65n6d\" (UID: \"9becf02e-a8cd-4e6f-92b4-b46fa4218220\") " pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" Jan 24 00:43:20.626775 kubelet[2580]: I0124 00:43:20.626773 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa23a976-feaf-4984-bbe7-f5e048e9da19-goldmane-ca-bundle\") pod \"goldmane-666569f655-p7p9p\" (UID: \"aa23a976-feaf-4984-bbe7-f5e048e9da19\") " pod="calico-system/goldmane-666569f655-p7p9p" Jan 24 00:43:20.627144 kubelet[2580]: I0124 00:43:20.626804 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x84v\" (UniqueName: \"kubernetes.io/projected/d3c71020-1a24-4b11-83e5-a9fa3d70fc14-kube-api-access-7x84v\") pod \"coredns-668d6bf9bc-mxrqc\" (UID: \"d3c71020-1a24-4b11-83e5-a9fa3d70fc14\") " pod="kube-system/coredns-668d6bf9bc-mxrqc" Jan 24 00:43:20.627144 kubelet[2580]: I0124 00:43:20.626835 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1ac7bc7-7591-48d6-8111-89103d85ee5f-tigera-ca-bundle\") pod \"calico-kube-controllers-664696c7bc-cdlnv\" (UID: \"d1ac7bc7-7591-48d6-8111-89103d85ee5f\") " pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" Jan 24 00:43:20.627144 kubelet[2580]: I0124 00:43:20.626862 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qhtg\" (UniqueName: \"kubernetes.io/projected/6a3c8225-48cc-431d-9350-25407dc6fc7b-kube-api-access-9qhtg\") pod \"calico-apiserver-6fc4d58c87-h7n7g\" (UID: \"6a3c8225-48cc-431d-9350-25407dc6fc7b\") " pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" Jan 24 00:43:20.627144 kubelet[2580]: I0124 00:43:20.627021 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3c71020-1a24-4b11-83e5-a9fa3d70fc14-config-volume\") pod \"coredns-668d6bf9bc-mxrqc\" (UID: \"d3c71020-1a24-4b11-83e5-a9fa3d70fc14\") " pod="kube-system/coredns-668d6bf9bc-mxrqc" Jan 24 00:43:20.627144 kubelet[2580]: I0124 00:43:20.627115 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/9becf02e-a8cd-4e6f-92b4-b46fa4218220-kube-api-access-9gjzm\") pod \"calico-apiserver-6fc4d58c87-65n6d\" (UID: \"9becf02e-a8cd-4e6f-92b4-b46fa4218220\") " pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" Jan 24 00:43:20.627382 kubelet[2580]: I0124 00:43:20.627148 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-backend-key-pair\") pod \"whisker-7bd658f48f-9r589\" (UID: \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\") " pod="calico-system/whisker-7bd658f48f-9r589" Jan 24 00:43:20.627382 kubelet[2580]: I0124 00:43:20.627179 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-78mwf\" (UniqueName: \"kubernetes.io/projected/aa23a976-feaf-4984-bbe7-f5e048e9da19-kube-api-access-78mwf\") pod \"goldmane-666569f655-p7p9p\" (UID: \"aa23a976-feaf-4984-bbe7-f5e048e9da19\") " pod="calico-system/goldmane-666569f655-p7p9p" Jan 24 00:43:20.627382 kubelet[2580]: I0124 00:43:20.627208 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-ca-bundle\") pod \"whisker-7bd658f48f-9r589\" (UID: \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\") " pod="calico-system/whisker-7bd658f48f-9r589" Jan 24 00:43:20.627382 kubelet[2580]: I0124 00:43:20.627237 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm8nf\" (UniqueName: \"kubernetes.io/projected/d1ac7bc7-7591-48d6-8111-89103d85ee5f-kube-api-access-xm8nf\") pod \"calico-kube-controllers-664696c7bc-cdlnv\" (UID: \"d1ac7bc7-7591-48d6-8111-89103d85ee5f\") " pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" Jan 24 00:43:20.627382 kubelet[2580]: I0124 00:43:20.627262 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c21f44-8ae7-42a7-a6ec-8b8562e76305-config-volume\") pod \"coredns-668d6bf9bc-dzdbl\" (UID: \"e3c21f44-8ae7-42a7-a6ec-8b8562e76305\") " pod="kube-system/coredns-668d6bf9bc-dzdbl" Jan 24 00:43:20.627613 kubelet[2580]: I0124 00:43:20.627290 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a3c8225-48cc-431d-9350-25407dc6fc7b-calico-apiserver-certs\") pod \"calico-apiserver-6fc4d58c87-h7n7g\" (UID: \"6a3c8225-48cc-431d-9350-25407dc6fc7b\") " pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" Jan 24 00:43:20.627613 kubelet[2580]: I0124 00:43:20.627314 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdkt\" (UniqueName: \"kubernetes.io/projected/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-kube-api-access-qbdkt\") pod \"whisker-7bd658f48f-9r589\" (UID: \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\") " pod="calico-system/whisker-7bd658f48f-9r589" Jan 24 00:43:20.627613 kubelet[2580]: I0124 00:43:20.627336 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa23a976-feaf-4984-bbe7-f5e048e9da19-config\") pod \"goldmane-666569f655-p7p9p\" (UID: \"aa23a976-feaf-4984-bbe7-f5e048e9da19\") " pod="calico-system/goldmane-666569f655-p7p9p" Jan 24 00:43:20.627613 kubelet[2580]: I0124 00:43:20.627363 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aa23a976-feaf-4984-bbe7-f5e048e9da19-goldmane-key-pair\") pod \"goldmane-666569f655-p7p9p\" (UID: \"aa23a976-feaf-4984-bbe7-f5e048e9da19\") " pod="calico-system/goldmane-666569f655-p7p9p" Jan 24 00:43:20.627613 kubelet[2580]: I0124 00:43:20.627390 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szv8k\" (UniqueName: \"kubernetes.io/projected/e3c21f44-8ae7-42a7-a6ec-8b8562e76305-kube-api-access-szv8k\") pod \"coredns-668d6bf9bc-dzdbl\" (UID: \"e3c21f44-8ae7-42a7-a6ec-8b8562e76305\") " 
pod="kube-system/coredns-668d6bf9bc-dzdbl" Jan 24 00:43:20.820741 kubelet[2580]: E0124 00:43:20.820663 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:20.822535 containerd[1467]: time="2026-01-24T00:43:20.821657447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mxrqc,Uid:d3c71020-1a24-4b11-83e5-a9fa3d70fc14,Namespace:kube-system,Attempt:0,}" Jan 24 00:43:20.838879 containerd[1467]: time="2026-01-24T00:43:20.838605513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664696c7bc-cdlnv,Uid:d1ac7bc7-7591-48d6-8111-89103d85ee5f,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:20.844814 kubelet[2580]: E0124 00:43:20.843470 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:20.845504 containerd[1467]: time="2026-01-24T00:43:20.845467500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:43:20.851187 kubelet[2580]: E0124 00:43:20.850797 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:20.853718 containerd[1467]: time="2026-01-24T00:43:20.853250911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzdbl,Uid:e3c21f44-8ae7-42a7-a6ec-8b8562e76305,Namespace:kube-system,Attempt:0,}" Jan 24 00:43:20.883532 containerd[1467]: time="2026-01-24T00:43:20.883469707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p7p9p,Uid:aa23a976-feaf-4984-bbe7-f5e048e9da19,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:20.898882 containerd[1467]: time="2026-01-24T00:43:20.898482513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-65n6d,Uid:9becf02e-a8cd-4e6f-92b4-b46fa4218220,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:43:20.907649 containerd[1467]: time="2026-01-24T00:43:20.907508356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd658f48f-9r589,Uid:15e93dd9-ce77-43a7-8874-cb6fdf7c51a0,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:20.935879 containerd[1467]: time="2026-01-24T00:43:20.935654930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-h7n7g,Uid:6a3c8225-48cc-431d-9350-25407dc6fc7b,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:43:21.383348 containerd[1467]: time="2026-01-24T00:43:21.371840420Z" level=error msg="Failed to destroy network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.396373 systemd[1]: Created slice kubepods-besteffort-pod3935770d_1f88_434a_a13a_250f66f25ebf.slice - libcontainer container kubepods-besteffort-pod3935770d_1f88_434a_a13a_250f66f25ebf.slice. 
Jan 24 00:43:21.397337 containerd[1467]: time="2026-01-24T00:43:21.396797031Z" level=error msg="Failed to destroy network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.399789 containerd[1467]: time="2026-01-24T00:43:21.399531350Z" level=error msg="encountered an error cleaning up failed sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.399789 containerd[1467]: time="2026-01-24T00:43:21.399636594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mxrqc,Uid:d3c71020-1a24-4b11-83e5-a9fa3d70fc14,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.402823 containerd[1467]: time="2026-01-24T00:43:21.402766750Z" level=error msg="Failed to destroy network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.406173 containerd[1467]: time="2026-01-24T00:43:21.406134302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xzhgv,Uid:3935770d-1f88-434a-a13a-250f66f25ebf,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:21.406726 containerd[1467]: time="2026-01-24T00:43:21.406260533Z" level=error msg="encountered an error cleaning up failed sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.408257 containerd[1467]: time="2026-01-24T00:43:21.408170503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzdbl,Uid:e3c21f44-8ae7-42a7-a6ec-8b8562e76305,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.418783 containerd[1467]: time="2026-01-24T00:43:21.418673582Z" level=error msg="Failed to destroy network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.421980 containerd[1467]: time="2026-01-24T00:43:21.421848347Z" level=error msg="encountered an error cleaning up failed sandbox 
\"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.421980 containerd[1467]: time="2026-01-24T00:43:21.421992906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd658f48f-9r589,Uid:15e93dd9-ce77-43a7-8874-cb6fdf7c51a0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.422326 containerd[1467]: time="2026-01-24T00:43:21.422239054Z" level=error msg="encountered an error cleaning up failed sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.422545 kubelet[2580]: E0124 00:43:21.422401 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.422545 kubelet[2580]: E0124 00:43:21.422527 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bd658f48f-9r589" Jan 24 00:43:21.422715 kubelet[2580]: E0124 00:43:21.422622 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bd658f48f-9r589" Jan 24 00:43:21.422820 kubelet[2580]: E0124 00:43:21.422730 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bd658f48f-9r589_calico-system(15e93dd9-ce77-43a7-8874-cb6fdf7c51a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bd658f48f-9r589_calico-system(15e93dd9-ce77-43a7-8874-cb6fdf7c51a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bd658f48f-9r589" podUID="15e93dd9-ce77-43a7-8874-cb6fdf7c51a0" Jan 24 00:43:21.423298 
containerd[1467]: time="2026-01-24T00:43:21.423111652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-h7n7g,Uid:6a3c8225-48cc-431d-9350-25407dc6fc7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.423441 kubelet[2580]: E0124 00:43:21.423280 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.423441 kubelet[2580]: E0124 00:43:21.423312 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mxrqc" Jan 24 00:43:21.423441 kubelet[2580]: E0124 00:43:21.423329 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mxrqc" Jan 24 00:43:21.423578 kubelet[2580]: E0124 00:43:21.423367 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mxrqc_kube-system(d3c71020-1a24-4b11-83e5-a9fa3d70fc14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mxrqc_kube-system(d3c71020-1a24-4b11-83e5-a9fa3d70fc14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mxrqc" podUID="d3c71020-1a24-4b11-83e5-a9fa3d70fc14" Jan 24 00:43:21.423578 kubelet[2580]: E0124 00:43:21.423418 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.423578 kubelet[2580]: E0124 00:43:21.423436 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dzdbl" Jan 24 00:43:21.423790 kubelet[2580]: E0124 00:43:21.423446 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dzdbl" Jan 24 00:43:21.423790 kubelet[2580]: E0124 00:43:21.423489 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dzdbl_kube-system(e3c21f44-8ae7-42a7-a6ec-8b8562e76305)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dzdbl_kube-system(e3c21f44-8ae7-42a7-a6ec-8b8562e76305)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dzdbl" podUID="e3c21f44-8ae7-42a7-a6ec-8b8562e76305" Jan 24 00:43:21.424806 containerd[1467]: time="2026-01-24T00:43:21.424693013Z" level=error msg="Failed to destroy network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.425739 kubelet[2580]: E0124 00:43:21.425696 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.425813 kubelet[2580]: E0124 00:43:21.425759 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" Jan 24 00:43:21.425813 kubelet[2580]: E0124 00:43:21.425786 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" Jan 24 00:43:21.426246 kubelet[2580]: E0124 00:43:21.425828 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc4d58c87-h7n7g_calico-apiserver(6a3c8225-48cc-431d-9350-25407dc6fc7b)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"calico-apiserver-6fc4d58c87-h7n7g_calico-apiserver(6a3c8225-48cc-431d-9350-25407dc6fc7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:43:21.427080 containerd[1467]: time="2026-01-24T00:43:21.426954726Z" level=error msg="encountered an error cleaning up failed sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.427245 containerd[1467]: time="2026-01-24T00:43:21.427077965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p7p9p,Uid:aa23a976-feaf-4984-bbe7-f5e048e9da19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.428573 kubelet[2580]: E0124 00:43:21.427483 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.428573 kubelet[2580]: E0124 00:43:21.428139 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p7p9p" Jan 24 00:43:21.429071 kubelet[2580]: E0124 00:43:21.428838 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p7p9p" Jan 24 00:43:21.429623 kubelet[2580]: E0124 00:43:21.429263 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-p7p9p_calico-system(aa23a976-feaf-4984-bbe7-f5e048e9da19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-p7p9p_calico-system(aa23a976-feaf-4984-bbe7-f5e048e9da19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:43:21.430765 containerd[1467]: time="2026-01-24T00:43:21.430730447Z" level=error msg="Failed to destroy network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.459676 containerd[1467]: time="2026-01-24T00:43:21.431835374Z" level=error msg="encountered an error cleaning up failed sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.460087 containerd[1467]: time="2026-01-24T00:43:21.437346235Z" level=error msg="Failed to destroy network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.460619 containerd[1467]: time="2026-01-24T00:43:21.460458783Z" level=error msg="encountered an error cleaning up failed sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.460773 containerd[1467]: time="2026-01-24T00:43:21.460708765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664696c7bc-cdlnv,Uid:d1ac7bc7-7591-48d6-8111-89103d85ee5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.461011 containerd[1467]: time="2026-01-24T00:43:21.460797851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-65n6d,Uid:9becf02e-a8cd-4e6f-92b4-b46fa4218220,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.461204 kubelet[2580]: E0124 00:43:21.461088 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.461204 kubelet[2580]: E0124 00:43:21.461149 2580 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" Jan 24 00:43:21.461204 kubelet[2580]: E0124 00:43:21.461168 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" Jan 24 00:43:21.461387 kubelet[2580]: E0124 00:43:21.461205 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc4d58c87-65n6d_calico-apiserver(9becf02e-a8cd-4e6f-92b4-b46fa4218220)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc4d58c87-65n6d_calico-apiserver(9becf02e-a8cd-4e6f-92b4-b46fa4218220)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:43:21.466523 kubelet[2580]: E0124 00:43:21.466432 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.466840 kubelet[2580]: E0124 00:43:21.466526 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" Jan 24 00:43:21.466840 kubelet[2580]: E0124 00:43:21.466559 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" Jan 24 00:43:21.466840 kubelet[2580]: E0124 00:43:21.466631 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-664696c7bc-cdlnv_calico-system(d1ac7bc7-7591-48d6-8111-89103d85ee5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-664696c7bc-cdlnv_calico-system(d1ac7bc7-7591-48d6-8111-89103d85ee5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:43:21.470518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028-shm.mount: Deactivated successfully. Jan 24 00:43:21.666772 containerd[1467]: time="2026-01-24T00:43:21.663750549Z" level=error msg="Failed to destroy network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.666772 containerd[1467]: time="2026-01-24T00:43:21.664774688Z" level=error msg="encountered an error cleaning up failed sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.666772 containerd[1467]: time="2026-01-24T00:43:21.665172094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xzhgv,Uid:3935770d-1f88-434a-a13a-250f66f25ebf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.667426 kubelet[2580]: E0124 00:43:21.666003 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:21.667426 kubelet[2580]: E0124 00:43:21.666195 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:21.667426 kubelet[2580]: E0124 00:43:21.666225 2580 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xzhgv" Jan 24 00:43:21.667616 
kubelet[2580]: E0124 00:43:21.666282 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:21.670791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7-shm.mount: Deactivated successfully. Jan 24 00:43:21.858588 kubelet[2580]: I0124 00:43:21.858381 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:21.861369 kubelet[2580]: I0124 00:43:21.861155 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:21.866117 kubelet[2580]: I0124 00:43:21.866009 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:21.884649 containerd[1467]: time="2026-01-24T00:43:21.883785450Z" level=info msg="StopPodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\"" Jan 24 00:43:21.885009 containerd[1467]: time="2026-01-24T00:43:21.884703862Z" level=info msg="StopPodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\"" Jan 24 00:43:21.886115 kubelet[2580]: I0124 00:43:21.885866 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:21.886297 containerd[1467]: time="2026-01-24T00:43:21.886126114Z" level=info msg="StopPodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\"" Jan 24 00:43:21.889125 containerd[1467]: time="2026-01-24T00:43:21.888882972Z" level=info msg="Ensure that sandbox f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4 in task-service has been cleanup successfully" Jan 24 00:43:21.895334 kubelet[2580]: I0124 00:43:21.895292 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:21.905273 containerd[1467]: time="2026-01-24T00:43:21.889009393Z" level=info msg="Ensure that sandbox c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046 in task-service has been cleanup successfully" Jan 24 00:43:21.905661 kubelet[2580]: I0124 00:43:21.905632 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:21.906845 containerd[1467]: time="2026-01-24T00:43:21.890209498Z" level=info msg="StopPodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\"" Jan 24 00:43:21.909090 containerd[1467]: time="2026-01-24T00:43:21.907147304Z" level=info msg="StopPodSandbox for 
\"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\"" Jan 24 00:43:21.909090 containerd[1467]: time="2026-01-24T00:43:21.908096564Z" level=info msg="Ensure that sandbox 152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196 in task-service has been cleanup successfully" Jan 24 00:43:21.909461 containerd[1467]: time="2026-01-24T00:43:21.909360270Z" level=info msg="Ensure that sandbox 6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf in task-service has been cleanup successfully" Jan 24 00:43:21.920222 containerd[1467]: time="2026-01-24T00:43:21.889097687Z" level=info msg="Ensure that sandbox 4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028 in task-service has been cleanup successfully" Jan 24 00:43:21.922085 containerd[1467]: time="2026-01-24T00:43:21.902354343Z" level=info msg="StopPodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\"" Jan 24 00:43:21.922085 containerd[1467]: time="2026-01-24T00:43:21.921555405Z" level=info msg="Ensure that sandbox e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7 in task-service has been cleanup successfully" Jan 24 00:43:21.933011 kubelet[2580]: I0124 00:43:21.932824 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:21.938847 containerd[1467]: time="2026-01-24T00:43:21.938669284Z" level=info msg="StopPodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\"" Jan 24 00:43:21.940401 containerd[1467]: time="2026-01-24T00:43:21.939743446Z" level=info msg="Ensure that sandbox 87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1 in task-service has been cleanup successfully" Jan 24 00:43:21.967537 kubelet[2580]: I0124 00:43:21.967363 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:21.968776 containerd[1467]: time="2026-01-24T00:43:21.968599314Z" level=info msg="StopPodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\"" Jan 24 00:43:21.969134 containerd[1467]: time="2026-01-24T00:43:21.969105893Z" level=info msg="Ensure that sandbox 585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62 in task-service has been cleanup successfully" Jan 24 00:43:22.078420 containerd[1467]: time="2026-01-24T00:43:22.078293774Z" level=error msg="StopPodSandbox for \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" failed" error="failed to destroy network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.078710 kubelet[2580]: E0124 00:43:22.078627 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:22.078834 kubelet[2580]: E0124 00:43:22.078715 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196"} Jan 24 00:43:22.078834 kubelet[2580]: E0124 00:43:22.078801 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3c71020-1a24-4b11-83e5-a9fa3d70fc14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.079175 kubelet[2580]: E0124 00:43:22.078839 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3c71020-1a24-4b11-83e5-a9fa3d70fc14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mxrqc" podUID="d3c71020-1a24-4b11-83e5-a9fa3d70fc14" Jan 24 00:43:22.081075 containerd[1467]: time="2026-01-24T00:43:22.080790041Z" level=error msg="StopPodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" failed" error="failed to destroy network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.082140 kubelet[2580]: E0124 00:43:22.081719 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:22.082417 kubelet[2580]: E0124 00:43:22.082180 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1"} Jan 24 00:43:22.082592 kubelet[2580]: E0124 00:43:22.082513 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.082827 kubelet[2580]: E0124 00:43:22.082680 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bd658f48f-9r589" podUID="15e93dd9-ce77-43a7-8874-cb6fdf7c51a0" Jan 24 00:43:22.090145 containerd[1467]: time="2026-01-24T00:43:22.088659820Z" level=error msg="StopPodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" failed" error="failed to destroy network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.090145 containerd[1467]: time="2026-01-24T00:43:22.088766214Z" level=error msg="StopPodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" failed" error="failed to destroy network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.090145 containerd[1467]: time="2026-01-24T00:43:22.088856632Z" level=error msg="StopPodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" failed" error="failed to destroy network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.090145 containerd[1467]: time="2026-01-24T00:43:22.088875429Z" level=error msg="StopPodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" failed" error="failed to destroy network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.090408 kubelet[2580]: E0124 00:43:22.089411 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:22.090408 kubelet[2580]: E0124 00:43:22.089427 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:22.090408 kubelet[2580]: E0124 00:43:22.089477 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf"} Jan 24 00:43:22.090408 kubelet[2580]: E0124 00:43:22.089508 2580 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:22.090408 kubelet[2580]: E0124 00:43:22.089534 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7"} Jan 24 00:43:22.090593 kubelet[2580]: E0124 00:43:22.089566 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3935770d-1f88-434a-a13a-250f66f25ebf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.090593 kubelet[2580]: E0124 00:43:22.089602 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3935770d-1f88-434a-a13a-250f66f25ebf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:22.090593 kubelet[2580]: E0124 00:43:22.089478 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046"} Jan 24 00:43:22.090593 kubelet[2580]: E0124 00:43:22.089660 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9becf02e-a8cd-4e6f-92b4-b46fa4218220\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.091142 kubelet[2580]: E0124 00:43:22.089684 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9becf02e-a8cd-4e6f-92b4-b46fa4218220\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:43:22.091142 kubelet[2580]: E0124 00:43:22.089526 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a3c8225-48cc-431d-9350-25407dc6fc7b\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.091142 kubelet[2580]: E0124 00:43:22.089727 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a3c8225-48cc-431d-9350-25407dc6fc7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:43:22.091487 kubelet[2580]: E0124 00:43:22.089831 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:22.091487 kubelet[2580]: E0124 00:43:22.090005 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4"} Jan 24 00:43:22.091487 kubelet[2580]: E0124 00:43:22.090103 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3c21f44-8ae7-42a7-a6ec-8b8562e76305\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.091487 kubelet[2580]: E0124 00:43:22.090134 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3c21f44-8ae7-42a7-a6ec-8b8562e76305\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dzdbl" podUID="e3c21f44-8ae7-42a7-a6ec-8b8562e76305" Jan 24 00:43:22.091871 containerd[1467]: time="2026-01-24T00:43:22.091829237Z" level=error msg="StopPodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" failed" error="failed to destroy network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.092720 kubelet[2580]: E0124 00:43:22.092313 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:22.092720 kubelet[2580]: E0124 00:43:22.092346 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028"} Jan 24 00:43:22.092720 kubelet[2580]: E0124 00:43:22.092381 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1ac7bc7-7591-48d6-8111-89103d85ee5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.092720 kubelet[2580]: E0124 00:43:22.092408 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1ac7bc7-7591-48d6-8111-89103d85ee5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:43:22.099642 containerd[1467]: time="2026-01-24T00:43:22.099354565Z" level=error msg="StopPodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" failed" error="failed to destroy network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:43:22.100667 kubelet[2580]: E0124 00:43:22.100425 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:22.100667 kubelet[2580]: E0124 00:43:22.100648 2580 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62"} Jan 24 00:43:22.100795 kubelet[2580]: E0124 00:43:22.100693 2580 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa23a976-feaf-4984-bbe7-f5e048e9da19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:43:22.100795 kubelet[2580]: E0124 00:43:22.100722 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa23a976-feaf-4984-bbe7-f5e048e9da19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:43:29.762673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095205554.mount: Deactivated successfully. Jan 24 00:43:29.929598 containerd[1467]: time="2026-01-24T00:43:29.929297037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:29.931777 containerd[1467]: time="2026-01-24T00:43:29.931723631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:43:29.936256 containerd[1467]: time="2026-01-24T00:43:29.934362521Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:29.948292 containerd[1467]: time="2026-01-24T00:43:29.947659959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:43:29.954218 containerd[1467]: time="2026-01-24T00:43:29.951141537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.105473655s" Jan 24 00:43:29.954218 containerd[1467]: time="2026-01-24T00:43:29.951233859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:43:29.984634 containerd[1467]: time="2026-01-24T00:43:29.983696757Z" level=info msg="CreateContainer within sandbox \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:43:30.032584 containerd[1467]: time="2026-01-24T00:43:30.032394758Z" level=info msg="CreateContainer within sandbox \"3bf4c095ad463ff4fb83bc2e0981d784f63629b41cbb86aabe1ba2d05a7147e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"45e21df47c4d3996b18f90d154cf0eb998c3a2f66d4599cb54a132047584ce1a\"" Jan 24 00:43:30.036443 containerd[1467]: time="2026-01-24T00:43:30.035259383Z" level=info msg="StartContainer for \"45e21df47c4d3996b18f90d154cf0eb998c3a2f66d4599cb54a132047584ce1a\"" Jan 24 00:43:30.142446 systemd[1]: Started cri-containerd-45e21df47c4d3996b18f90d154cf0eb998c3a2f66d4599cb54a132047584ce1a.scope - libcontainer container 45e21df47c4d3996b18f90d154cf0eb998c3a2f66d4599cb54a132047584ce1a. 
Jan 24 00:43:30.276475 containerd[1467]: time="2026-01-24T00:43:30.276405444Z" level=info msg="StartContainer for \"45e21df47c4d3996b18f90d154cf0eb998c3a2f66d4599cb54a132047584ce1a\" returns successfully" Jan 24 00:43:30.435993 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:43:30.437338 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:43:30.731170 containerd[1467]: time="2026-01-24T00:43:30.730253512Z" level=info msg="StopPodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\"" Jan 24 00:43:31.033757 kubelet[2580]: E0124 00:43:31.032458 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:30.950 [INFO][3905] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:30.951 [INFO][3905] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" iface="eth0" netns="/var/run/netns/cni-d4ee2d24-ceb8-b944-8445-a2dafe44939d" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:30.955 [INFO][3905] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" iface="eth0" netns="/var/run/netns/cni-d4ee2d24-ceb8-b944-8445-a2dafe44939d" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:30.956 [INFO][3905] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" iface="eth0" netns="/var/run/netns/cni-d4ee2d24-ceb8-b944-8445-a2dafe44939d" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:30.956 [INFO][3905] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:30.956 [INFO][3905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.194 [INFO][3915] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.197 [INFO][3915] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.197 [INFO][3915] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.243 [WARNING][3915] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.244 [INFO][3915] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.250 [INFO][3915] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:31.268108 containerd[1467]: 2026-01-24 00:43:31.261 [INFO][3905] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:31.270676 containerd[1467]: time="2026-01-24T00:43:31.269460007Z" level=info msg="TearDown network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" successfully" Jan 24 00:43:31.270676 containerd[1467]: time="2026-01-24T00:43:31.269506254Z" level=info msg="StopPodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" returns successfully" Jan 24 00:43:31.276506 systemd[1]: run-netns-cni\x2dd4ee2d24\x2dceb8\x2db944\x2d8445\x2da2dafe44939d.mount: Deactivated successfully. Jan 24 00:43:31.358187 kubelet[2580]: I0124 00:43:31.357282 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-ca-bundle\") pod \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\" (UID: \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\") " Jan 24 00:43:31.358187 kubelet[2580]: I0124 00:43:31.357365 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbdkt\" (UniqueName: \"kubernetes.io/projected/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-kube-api-access-qbdkt\") pod \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\" (UID: \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\") " Jan 24 00:43:31.358187 kubelet[2580]: I0124 00:43:31.357408 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-backend-key-pair\") pod \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\" (UID: \"15e93dd9-ce77-43a7-8874-cb6fdf7c51a0\") " Jan 24 00:43:31.358187 kubelet[2580]: I0124 00:43:31.358099 2580 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "15e93dd9-ce77-43a7-8874-cb6fdf7c51a0" (UID: "15e93dd9-ce77-43a7-8874-cb6fdf7c51a0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:43:31.368514 systemd[1]: var-lib-kubelet-pods-15e93dd9\x2dce77\x2d43a7\x2d8874\x2dcb6fdf7c51a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqbdkt.mount: Deactivated successfully. 
Jan 24 00:43:31.370863 kubelet[2580]: I0124 00:43:31.369584 2580 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "15e93dd9-ce77-43a7-8874-cb6fdf7c51a0" (UID: "15e93dd9-ce77-43a7-8874-cb6fdf7c51a0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:43:31.372250 kubelet[2580]: I0124 00:43:31.372131 2580 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-kube-api-access-qbdkt" (OuterVolumeSpecName: "kube-api-access-qbdkt") pod "15e93dd9-ce77-43a7-8874-cb6fdf7c51a0" (UID: "15e93dd9-ce77-43a7-8874-cb6fdf7c51a0"). InnerVolumeSpecName "kube-api-access-qbdkt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:43:31.375699 systemd[1]: var-lib-kubelet-pods-15e93dd9\x2dce77\x2d43a7\x2d8874\x2dcb6fdf7c51a0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:43:31.458688 kubelet[2580]: I0124 00:43:31.458251 2580 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 24 00:43:31.458688 kubelet[2580]: I0124 00:43:31.458308 2580 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qbdkt\" (UniqueName: \"kubernetes.io/projected/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-kube-api-access-qbdkt\") on node \"localhost\" DevicePath \"\"" Jan 24 00:43:31.458688 kubelet[2580]: I0124 00:43:31.458327 2580 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 24 00:43:32.039043 kubelet[2580]: E0124 00:43:32.038281 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:32.047363 systemd[1]: Removed slice kubepods-besteffort-pod15e93dd9_ce77_43a7_8874_cb6fdf7c51a0.slice - libcontainer container kubepods-besteffort-pod15e93dd9_ce77_43a7_8874_cb6fdf7c51a0.slice. Jan 24 00:43:32.076343 kubelet[2580]: I0124 00:43:32.076229 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kfd6z" podStartSLOduration=3.3606301800000002 podStartE2EDuration="21.076202865s" podCreationTimestamp="2026-01-24 00:43:11 +0000 UTC" firstStartedPulling="2026-01-24 00:43:12.242542495 +0000 UTC m=+24.416744348" lastFinishedPulling="2026-01-24 00:43:29.958115179 +0000 UTC m=+42.132317033" observedRunningTime="2026-01-24 00:43:31.089111838 +0000 UTC m=+43.263313711" watchObservedRunningTime="2026-01-24 00:43:32.076202865 +0000 UTC m=+44.250404728" Jan 24 00:43:32.189392 systemd[1]: Created slice kubepods-besteffort-pod56fae327_01cc_4cd0_849f_72d480e4300e.slice - libcontainer container kubepods-besteffort-pod56fae327_01cc_4cd0_849f_72d480e4300e.slice. 
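(The recurring dns.go:153 "Nameserver limits exceeded" errors are kubelet noting that the host resolv.conf lists more nameservers than the three it will pass to pods, so it drops the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Python sketch of that truncation; the resolv.conf contents below are illustrative, since the actual host file is not shown in this log.)

MAX_NAMESERVERS = 3  # kubelet's per-pod nameserver limit

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    # Return the nameservers kubelet would keep, in file order, capped at the limit.
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]

example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']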
Jan 24 00:43:32.276828 kubelet[2580]: I0124 00:43:32.276645 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rv2\" (UniqueName: \"kubernetes.io/projected/56fae327-01cc-4cd0-849f-72d480e4300e-kube-api-access-p9rv2\") pod \"whisker-7986b6bf7-c6tcd\" (UID: \"56fae327-01cc-4cd0-849f-72d480e4300e\") " pod="calico-system/whisker-7986b6bf7-c6tcd" Jan 24 00:43:32.276828 kubelet[2580]: I0124 00:43:32.276840 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56fae327-01cc-4cd0-849f-72d480e4300e-whisker-backend-key-pair\") pod \"whisker-7986b6bf7-c6tcd\" (UID: \"56fae327-01cc-4cd0-849f-72d480e4300e\") " pod="calico-system/whisker-7986b6bf7-c6tcd" Jan 24 00:43:32.278283 kubelet[2580]: I0124 00:43:32.276870 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fae327-01cc-4cd0-849f-72d480e4300e-whisker-ca-bundle\") pod \"whisker-7986b6bf7-c6tcd\" (UID: \"56fae327-01cc-4cd0-849f-72d480e4300e\") " pod="calico-system/whisker-7986b6bf7-c6tcd" Jan 24 00:43:32.386503 containerd[1467]: time="2026-01-24T00:43:32.384831651Z" level=info msg="StopPodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\"" Jan 24 00:43:32.421961 kubelet[2580]: I0124 00:43:32.415245 2580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e93dd9-ce77-43a7-8874-cb6fdf7c51a0" path="/var/lib/kubelet/pods/15e93dd9-ce77-43a7-8874-cb6fdf7c51a0/volumes" Jan 24 00:43:32.496397 containerd[1467]: time="2026-01-24T00:43:32.496270178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7986b6bf7-c6tcd,Uid:56fae327-01cc-4cd0-849f-72d480e4300e,Namespace:calico-system,Attempt:0,}" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.614 [INFO][4054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.617 [INFO][4054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" iface="eth0" netns="/var/run/netns/cni-7e62c416-e40e-c029-1ee8-1e603df0b942" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.617 [INFO][4054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" iface="eth0" netns="/var/run/netns/cni-7e62c416-e40e-c029-1ee8-1e603df0b942" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.619 [INFO][4054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" iface="eth0" netns="/var/run/netns/cni-7e62c416-e40e-c029-1ee8-1e603df0b942" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.619 [INFO][4054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.619 [INFO][4054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.720 [INFO][4107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.722 [INFO][4107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.722 [INFO][4107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.741 [WARNING][4107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.742 [INFO][4107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.749 [INFO][4107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:32.796130 containerd[1467]: 2026-01-24 00:43:32.768 [INFO][4054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:32.796130 containerd[1467]: time="2026-01-24T00:43:32.796063725Z" level=info msg="TearDown network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" successfully" Jan 24 00:43:32.796130 containerd[1467]: time="2026-01-24T00:43:32.796098770Z" level=info msg="StopPodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" returns successfully" Jan 24 00:43:32.801155 kubelet[2580]: E0124 00:43:32.800463 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:32.801381 containerd[1467]: time="2026-01-24T00:43:32.801302026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzdbl,Uid:e3c21f44-8ae7-42a7-a6ec-8b8562e76305,Namespace:kube-system,Attempt:1,}" Jan 24 00:43:32.804758 systemd[1]: run-netns-cni\x2d7e62c416\x2de40e\x2dc029\x2d1ee8\x2d1e603df0b942.mount: Deactivated successfully. 
Jan 24 00:43:33.274833 systemd-networkd[1405]: calic1847b362dc: Link UP Jan 24 00:43:33.282533 systemd-networkd[1405]: calic1847b362dc: Gained carrier Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.624 [INFO][4084] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.654 [INFO][4084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7986b6bf7--c6tcd-eth0 whisker-7986b6bf7- calico-system 56fae327-01cc-4cd0-849f-72d480e4300e 988 0 2026-01-24 00:43:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7986b6bf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7986b6bf7-c6tcd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic1847b362dc [] [] }} ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.654 [INFO][4084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.787 [INFO][4120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" HandleID="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Workload="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.788 [INFO][4120] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" HandleID="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Workload="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7986b6bf7-c6tcd", "timestamp":"2026-01-24 00:43:32.787660549 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.788 [INFO][4120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.788 [INFO][4120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.788 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.815 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.851 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.879 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.888 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.901 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.901 [INFO][4120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.911 [INFO][4120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:32.932 [INFO][4120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:33.212 [INFO][4120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:33.212 [INFO][4120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" host="localhost" Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:33.212 [INFO][4120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:43:33.320403 containerd[1467]: 2026-01-24 00:43:33.212 [INFO][4120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" HandleID="k8s-pod-network.6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Workload="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.321764 containerd[1467]: 2026-01-24 00:43:33.227 [INFO][4084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7986b6bf7--c6tcd-eth0", GenerateName:"whisker-7986b6bf7-", Namespace:"calico-system", SelfLink:"", UID:"56fae327-01cc-4cd0-849f-72d480e4300e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7986b6bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7986b6bf7-c6tcd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1847b362dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:33.321764 containerd[1467]: 2026-01-24 00:43:33.227 [INFO][4084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.321764 containerd[1467]: 2026-01-24 00:43:33.228 [INFO][4084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1847b362dc ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.321764 containerd[1467]: 2026-01-24 00:43:33.280 [INFO][4084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.321764 containerd[1467]: 2026-01-24 00:43:33.285 [INFO][4084] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7986b6bf7--c6tcd-eth0", GenerateName:"whisker-7986b6bf7-", Namespace:"calico-system", SelfLink:"", UID:"56fae327-01cc-4cd0-849f-72d480e4300e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7986b6bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b", Pod:"whisker-7986b6bf7-c6tcd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1847b362dc", MAC:"ce:24:26:06:12:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:33.321764 containerd[1467]: 2026-01-24 00:43:33.313 [INFO][4084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b" Namespace="calico-system" Pod="whisker-7986b6bf7-c6tcd" WorkloadEndpoint="localhost-k8s-whisker--7986b6bf7--c6tcd-eth0" Jan 24 00:43:33.390106 containerd[1467]: time="2026-01-24T00:43:33.389874945Z" level=info msg="StopPodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\"" Jan 24 00:43:33.396355 containerd[1467]: time="2026-01-24T00:43:33.393250880Z" level=info msg="StopPodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\"" Jan 24 00:43:33.506387 kernel: bpftool[4236]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:43:33.521370 containerd[1467]: time="2026-01-24T00:43:33.519883981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:33.527227 containerd[1467]: time="2026-01-24T00:43:33.523719054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:33.527227 containerd[1467]: time="2026-01-24T00:43:33.526244293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:33.545120 containerd[1467]: time="2026-01-24T00:43:33.533055090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:33.626460 systemd[1]: Started cri-containerd-6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b.scope - libcontainer container 6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b. 
Jan 24 00:43:33.697373 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:33.742184 systemd-networkd[1405]: calibed0514ec7c: Link UP Jan 24 00:43:33.742522 systemd-networkd[1405]: calibed0514ec7c: Gained carrier Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.202 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.261 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0 coredns-668d6bf9bc- kube-system e3c21f44-8ae7-42a7-a6ec-8b8562e76305 994 0 2026-01-24 00:42:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dzdbl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibed0514ec7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.262 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.419 [INFO][4172] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" HandleID="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.419 [INFO][4172] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" HandleID="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001384d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dzdbl", "timestamp":"2026-01-24 00:43:33.419042281 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.419 [INFO][4172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.419 [INFO][4172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.419 [INFO][4172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.480 [INFO][4172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.515 [INFO][4172] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.550 [INFO][4172] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.572 [INFO][4172] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.584 [INFO][4172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.584 [INFO][4172] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.598 [INFO][4172] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296 Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.637 [INFO][4172] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.698 [INFO][4172] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.698 [INFO][4172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" host="localhost" Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.698 [INFO][4172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:43:33.794310 containerd[1467]: 2026-01-24 00:43:33.698 [INFO][4172] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" HandleID="k8s-pod-network.3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.796463 containerd[1467]: 2026-01-24 00:43:33.733 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3c21f44-8ae7-42a7-a6ec-8b8562e76305", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dzdbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibed0514ec7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:33.796463 containerd[1467]: 2026-01-24 00:43:33.733 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.796463 containerd[1467]: 2026-01-24 00:43:33.733 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibed0514ec7c ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.796463 containerd[1467]: 2026-01-24 00:43:33.744 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.796463 
containerd[1467]: 2026-01-24 00:43:33.745 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3c21f44-8ae7-42a7-a6ec-8b8562e76305", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296", Pod:"coredns-668d6bf9bc-dzdbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibed0514ec7c", MAC:"be:20:29:3e:52:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:33.796463 containerd[1467]: 2026-01-24 00:43:33.782 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzdbl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:33.803056 containerd[1467]: time="2026-01-24T00:43:33.801736104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7986b6bf7-c6tcd,Uid:56fae327-01cc-4cd0-849f-72d480e4300e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6949e1ee977ff8b2670455c93c65f76ffc1a4e6f715ffa8f640b5b7838e6f06b\"" Jan 24 00:43:33.814737 containerd[1467]: time="2026-01-24T00:43:33.814634907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:43:33.852649 containerd[1467]: time="2026-01-24T00:43:33.851368203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:33.852649 containerd[1467]: time="2026-01-24T00:43:33.851444825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:33.852649 containerd[1467]: time="2026-01-24T00:43:33.851467488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:33.852649 containerd[1467]: time="2026-01-24T00:43:33.851601437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.621 [INFO][4209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.628 [INFO][4209] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" iface="eth0" netns="/var/run/netns/cni-89322d9b-ca4c-b16d-65f4-fa7fff50ff33" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.632 [INFO][4209] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" iface="eth0" netns="/var/run/netns/cni-89322d9b-ca4c-b16d-65f4-fa7fff50ff33" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.635 [INFO][4209] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" iface="eth0" netns="/var/run/netns/cni-89322d9b-ca4c-b16d-65f4-fa7fff50ff33" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.637 [INFO][4209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.637 [INFO][4209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.815 [INFO][4268] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.816 [INFO][4268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.816 [INFO][4268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.835 [WARNING][4268] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.836 [INFO][4268] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.845 [INFO][4268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:33.867159 containerd[1467]: 2026-01-24 00:43:33.860 [INFO][4209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:33.879054 containerd[1467]: time="2026-01-24T00:43:33.878765221Z" level=info msg="TearDown network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" successfully" Jan 24 00:43:33.879054 containerd[1467]: time="2026-01-24T00:43:33.878842935Z" level=info msg="StopPodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" returns successfully" Jan 24 00:43:33.880456 containerd[1467]: time="2026-01-24T00:43:33.880428279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-65n6d,Uid:9becf02e-a8cd-4e6f-92b4-b46fa4218220,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:43:33.880519 systemd[1]: run-netns-cni\x2d89322d9b\x2dca4c\x2db16d\x2d65f4\x2dfa7fff50ff33.mount: Deactivated successfully. Jan 24 00:43:33.895147 containerd[1467]: time="2026-01-24T00:43:33.894574018Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:33.902693 systemd[1]: Started cri-containerd-3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296.scope - libcontainer container 3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296. 
Jan 24 00:43:33.927358 containerd[1467]: time="2026-01-24T00:43:33.900426115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:43:33.927485 containerd[1467]: time="2026-01-24T00:43:33.902650493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:43:33.928329 kubelet[2580]: E0124 00:43:33.927716 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:43:33.928329 kubelet[2580]: E0124 00:43:33.927791 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:43:33.932364 kubelet[2580]: E0124 00:43:33.931845 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6f87900357124d41bee7c9d9fda81593,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:33.939775 containerd[1467]: time="2026-01-24T00:43:33.939108267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:43:33.957681 systemd-resolved[1348]: Failed to determine 
the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.729 [INFO][4230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.729 [INFO][4230] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" iface="eth0" netns="/var/run/netns/cni-6e96626e-9e6e-110c-71be-8e198fdab839" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.730 [INFO][4230] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" iface="eth0" netns="/var/run/netns/cni-6e96626e-9e6e-110c-71be-8e198fdab839" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.730 [INFO][4230] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" iface="eth0" netns="/var/run/netns/cni-6e96626e-9e6e-110c-71be-8e198fdab839" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.731 [INFO][4230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.731 [INFO][4230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.908 [INFO][4275] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.909 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.910 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.940 [WARNING][4275] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.940 [INFO][4275] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.964 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:33.992780 containerd[1467]: 2026-01-24 00:43:33.976 [INFO][4230] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:33.997291 containerd[1467]: time="2026-01-24T00:43:33.996711890Z" level=info msg="TearDown network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" successfully" Jan 24 00:43:33.997291 containerd[1467]: time="2026-01-24T00:43:33.997159772Z" level=info msg="StopPodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" returns successfully" Jan 24 00:43:34.000616 containerd[1467]: time="2026-01-24T00:43:34.000441194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664696c7bc-cdlnv,Uid:d1ac7bc7-7591-48d6-8111-89103d85ee5f,Namespace:calico-system,Attempt:1,}" Jan 24 00:43:34.030689 containerd[1467]: time="2026-01-24T00:43:34.029785274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzdbl,Uid:e3c21f44-8ae7-42a7-a6ec-8b8562e76305,Namespace:kube-system,Attempt:1,} returns sandbox id \"3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296\"" Jan 24 00:43:34.033318 kubelet[2580]: E0124 00:43:34.032595 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.036435 containerd[1467]: time="2026-01-24T00:43:34.036386374Z" level=info msg="CreateContainer within sandbox \"3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:43:34.047093 containerd[1467]: time="2026-01-24T00:43:34.041808637Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:34.085133 containerd[1467]: time="2026-01-24T00:43:34.084752376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:43:34.085786 containerd[1467]: time="2026-01-24T00:43:34.085202468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:43:34.085830 kubelet[2580]: E0124 00:43:34.085319 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:43:34.085830 kubelet[2580]: E0124 00:43:34.085424 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:43:34.086113 kubelet[2580]: E0124 00:43:34.085575 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:34.087055 kubelet[2580]: E0124 00:43:34.086717 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:43:34.095215 systemd[1]: run-netns-cni\x2d6e96626e\x2d9e6e\x2d110c\x2d71be\x2d8e198fdab839.mount: Deactivated successfully. Jan 24 00:43:34.184647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785505325.mount: Deactivated successfully. 
Jan 24 00:43:34.215631 containerd[1467]: time="2026-01-24T00:43:34.210862645Z" level=info msg="CreateContainer within sandbox \"3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc66f907c9f3939b6235b3deda228aed37c23bc0427b0a729b0c7c30f30a1fa9\"" Jan 24 00:43:34.220885 containerd[1467]: time="2026-01-24T00:43:34.219619703Z" level=info msg="StartContainer for \"bc66f907c9f3939b6235b3deda228aed37c23bc0427b0a729b0c7c30f30a1fa9\"" Jan 24 00:43:34.244736 kubelet[2580]: E0124 00:43:34.244494 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:43:34.375763 systemd[1]: Started cri-containerd-bc66f907c9f3939b6235b3deda228aed37c23bc0427b0a729b0c7c30f30a1fa9.scope - libcontainer container bc66f907c9f3939b6235b3deda228aed37c23bc0427b0a729b0c7c30f30a1fa9. 
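(The ErrImagePull/ImagePullBackOff entries above show containerd failing to resolve ghcr.io/flatcar/calico/whisker:v3.30.4 and ghcr.io/flatcar/calico/whisker-backend:v3.30.4 with http.StatusNotFound, i.e. the registry has no manifest for that tag. The registry-side check can be reproduced with a short Python sketch against the Docker Registry v2 API; it assumes ghcr.io issues anonymous pull tokens at /token for public repositories, and the script is illustrative, not part of this system.)

import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/whisker"  # image path taken from the PullImage log line
TAG = "v3.30.4"

def manifest_status(repository: str, tag: str) -> int:
    # Return the HTTP status for the tag's manifest (200 = exists, 404 = not found).
    token_url = f"https://{REGISTRY}/token?scope=repository:{repository}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept the manifest media types a container runtime would request.
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    print(f"{REPOSITORY}:{TAG} ->", manifest_status(REPOSITORY, TAG))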
Jan 24 00:43:34.388332 systemd-networkd[1405]: vxlan.calico: Link UP Jan 24 00:43:34.389404 systemd-networkd[1405]: vxlan.calico: Gained carrier Jan 24 00:43:34.404602 containerd[1467]: time="2026-01-24T00:43:34.404117972Z" level=info msg="StopPodSandbox for \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\"" Jan 24 00:43:34.631765 systemd-networkd[1405]: cali769619e9882: Link UP Jan 24 00:43:34.634718 systemd-networkd[1405]: cali769619e9882: Gained carrier Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.070 [INFO][4334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0 calico-apiserver-6fc4d58c87- calico-apiserver 9becf02e-a8cd-4e6f-92b4-b46fa4218220 1003 0 2026-01-24 00:43:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc4d58c87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fc4d58c87-65n6d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali769619e9882 [] [] }} ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.072 [INFO][4334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.189 [INFO][4366] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" HandleID="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.193 [INFO][4366] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" HandleID="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000529360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fc4d58c87-65n6d", "timestamp":"2026-01-24 00:43:34.189416834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.193 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.194 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.195 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.243 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.303 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.359 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.366 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.436 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.436 [INFO][4366] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.462 [INFO][4366] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.566 [INFO][4366] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.601 [INFO][4366] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.601 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" host="localhost" Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.601 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:43:34.709453 containerd[1467]: 2026-01-24 00:43:34.601 [INFO][4366] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" HandleID="k8s-pod-network.1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.710426 containerd[1467]: 2026-01-24 00:43:34.619 [INFO][4334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"9becf02e-a8cd-4e6f-92b4-b46fa4218220", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fc4d58c87-65n6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali769619e9882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:34.710426 containerd[1467]: 2026-01-24 00:43:34.619 [INFO][4334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.710426 containerd[1467]: 2026-01-24 00:43:34.624 [INFO][4334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali769619e9882 ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.710426 containerd[1467]: 2026-01-24 00:43:34.636 [INFO][4334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.710426 containerd[1467]: 2026-01-24 00:43:34.637 [INFO][4334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"9becf02e-a8cd-4e6f-92b4-b46fa4218220", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f", Pod:"calico-apiserver-6fc4d58c87-65n6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali769619e9882", MAC:"62:22:44:f5:c7:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:34.710426 containerd[1467]: 2026-01-24 00:43:34.699 [INFO][4334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-65n6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:34.751398 containerd[1467]: time="2026-01-24T00:43:34.751168353Z" level=info msg="StartContainer for \"bc66f907c9f3939b6235b3deda228aed37c23bc0427b0a729b0c7c30f30a1fa9\" returns successfully" Jan 24 00:43:34.839563 containerd[1467]: time="2026-01-24T00:43:34.838988453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:34.842226 containerd[1467]: time="2026-01-24T00:43:34.841178872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:34.842226 containerd[1467]: time="2026-01-24T00:43:34.841211952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:34.842226 containerd[1467]: time="2026-01-24T00:43:34.841359086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:34.905367 systemd[1]: Started cri-containerd-1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f.scope - libcontainer container 1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f. 
Jan 24 00:43:34.958111 systemd-networkd[1405]: caliaa90c08ad1b: Link UP Jan 24 00:43:34.962830 systemd-networkd[1405]: caliaa90c08ad1b: Gained carrier Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.728 [INFO][4445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.728 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" iface="eth0" netns="/var/run/netns/cni-74d54275-2680-0dd4-ddcf-69f8b538aaa4" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.729 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" iface="eth0" netns="/var/run/netns/cni-74d54275-2680-0dd4-ddcf-69f8b538aaa4" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.730 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" iface="eth0" netns="/var/run/netns/cni-74d54275-2680-0dd4-ddcf-69f8b538aaa4" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.731 [INFO][4445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.731 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.898 [INFO][4489] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.906 [INFO][4489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.912 [INFO][4489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.942 [WARNING][4489] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.942 [INFO][4489] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.949 [INFO][4489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:34.969684 containerd[1467]: 2026-01-24 00:43:34.964 [INFO][4445] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:34.970338 containerd[1467]: time="2026-01-24T00:43:34.970237825Z" level=info msg="TearDown network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" successfully" Jan 24 00:43:34.970338 containerd[1467]: time="2026-01-24T00:43:34.970268062Z" level=info msg="StopPodSandbox for \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" returns successfully" Jan 24 00:43:34.971524 kubelet[2580]: E0124 00:43:34.971299 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:34.974100 containerd[1467]: time="2026-01-24T00:43:34.973859622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mxrqc,Uid:d3c71020-1a24-4b11-83e5-a9fa3d70fc14,Namespace:kube-system,Attempt:1,}" Jan 24 00:43:34.989250 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.346 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0 calico-kube-controllers-664696c7bc- calico-system d1ac7bc7-7591-48d6-8111-89103d85ee5f 1006 0 2026-01-24 00:43:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:664696c7bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-664696c7bc-cdlnv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa90c08ad1b [] [] }} ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.346 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.659 [INFO][4416] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" HandleID="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.659 [INFO][4416] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" HandleID="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d5700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-664696c7bc-cdlnv", "timestamp":"2026-01-24 00:43:34.659277707 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.659 [INFO][4416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.659 [INFO][4416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.659 [INFO][4416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.708 [INFO][4416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.735 [INFO][4416] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.772 [INFO][4416] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.794 [INFO][4416] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.809 [INFO][4416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.809 [INFO][4416] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.833 [INFO][4416] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9 Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.867 [INFO][4416] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.911 [INFO][4416] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.911 [INFO][4416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" host="localhost" Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.911 [INFO][4416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:43:35.026777 containerd[1467]: 2026-01-24 00:43:34.911 [INFO][4416] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" HandleID="k8s-pod-network.f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.027643 containerd[1467]: 2026-01-24 00:43:34.938 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0", GenerateName:"calico-kube-controllers-664696c7bc-", Namespace:"calico-system", SelfLink:"", UID:"d1ac7bc7-7591-48d6-8111-89103d85ee5f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664696c7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-664696c7bc-cdlnv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa90c08ad1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:35.027643 containerd[1467]: 2026-01-24 00:43:34.941 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.027643 containerd[1467]: 2026-01-24 00:43:34.941 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa90c08ad1b ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.027643 containerd[1467]: 2026-01-24 00:43:34.965 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.027643 containerd[1467]: 2026-01-24 00:43:34.966 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0", GenerateName:"calico-kube-controllers-664696c7bc-", Namespace:"calico-system", SelfLink:"", UID:"d1ac7bc7-7591-48d6-8111-89103d85ee5f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664696c7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9", Pod:"calico-kube-controllers-664696c7bc-cdlnv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa90c08ad1b", MAC:"d2:d1:12:b3:86:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:35.027643 containerd[1467]: 2026-01-24 00:43:35.008 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9" Namespace="calico-system" Pod="calico-kube-controllers-664696c7bc-cdlnv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:35.097667 systemd[1]: run-netns-cni\x2d74d54275\x2d2680\x2d0dd4\x2dddcf\x2d69f8b538aaa4.mount: Deactivated successfully. Jan 24 00:43:35.114695 systemd-networkd[1405]: calibed0514ec7c: Gained IPv6LL Jan 24 00:43:35.120495 containerd[1467]: time="2026-01-24T00:43:35.118119288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:35.120495 containerd[1467]: time="2026-01-24T00:43:35.118208204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:35.120495 containerd[1467]: time="2026-01-24T00:43:35.118227860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:35.120495 containerd[1467]: time="2026-01-24T00:43:35.118347632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:35.122262 containerd[1467]: time="2026-01-24T00:43:35.122209791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-65n6d,Uid:9becf02e-a8cd-4e6f-92b4-b46fa4218220,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f\"" Jan 24 00:43:35.169612 containerd[1467]: time="2026-01-24T00:43:35.169291349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:43:35.241526 systemd[1]: Started cri-containerd-f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9.scope - libcontainer container f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9. Jan 24 00:43:35.243363 systemd-networkd[1405]: calic1847b362dc: Gained IPv6LL Jan 24 00:43:35.269666 kubelet[2580]: E0124 00:43:35.269588 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:35.272620 kubelet[2580]: E0124 00:43:35.272410 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:43:35.297813 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:35.306054 containerd[1467]: time="2026-01-24T00:43:35.302307863Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:35.311393 containerd[1467]: time="2026-01-24T00:43:35.310032795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:43:35.311393 containerd[1467]: time="2026-01-24T00:43:35.310799326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:43:35.312682 kubelet[2580]: E0124 00:43:35.311736 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:35.312682 kubelet[2580]: E0124 00:43:35.311788 2580 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:35.312682 kubelet[2580]: E0124 00:43:35.312240 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gjzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-65n6d_calico-apiserver(9becf02e-a8cd-4e6f-92b4-b46fa4218220): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:35.313629 kubelet[2580]: E0124 00:43:35.313478 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:43:35.385194 containerd[1467]: time="2026-01-24T00:43:35.384080192Z" level=info msg="StopPodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\"" Jan 
24 00:43:35.403532 kubelet[2580]: I0124 00:43:35.403406 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dzdbl" podStartSLOduration=44.403377075 podStartE2EDuration="44.403377075s" podCreationTimestamp="2026-01-24 00:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:43:35.395646907 +0000 UTC m=+47.569848760" watchObservedRunningTime="2026-01-24 00:43:35.403377075 +0000 UTC m=+47.577578938" Jan 24 00:43:35.434386 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL Jan 24 00:43:35.608978 containerd[1467]: time="2026-01-24T00:43:35.608707470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664696c7bc-cdlnv,Uid:d1ac7bc7-7591-48d6-8111-89103d85ee5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9\"" Jan 24 00:43:35.629180 containerd[1467]: time="2026-01-24T00:43:35.628873851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:43:35.645373 systemd-networkd[1405]: cali7a32941bc4f: Link UP Jan 24 00:43:35.653280 systemd-networkd[1405]: cali7a32941bc4f: Gained carrier Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.256 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0 coredns-668d6bf9bc- kube-system d3c71020-1a24-4b11-83e5-a9fa3d70fc14 1031 0 2026-01-24 00:42:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mxrqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a32941bc4f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.256 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.380 [INFO][4598] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" HandleID="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.381 [INFO][4598] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" HandleID="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mxrqc", "timestamp":"2026-01-24 00:43:35.380847733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.381 [INFO][4598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.382 [INFO][4598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.382 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.441 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.474 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.514 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.529 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.556 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.556 [INFO][4598] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.565 [INFO][4598] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.579 [INFO][4598] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.614 [INFO][4598] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.614 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" host="localhost" Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.614 [INFO][4598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:43:35.708134 containerd[1467]: 2026-01-24 00:43:35.614 [INFO][4598] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" HandleID="k8s-pod-network.c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.621 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3c71020-1a24-4b11-83e5-a9fa3d70fc14", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mxrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a32941bc4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.624 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.627 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a32941bc4f ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.664 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.709035 
containerd[1467]: 2026-01-24 00:43:35.666 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3c71020-1a24-4b11-83e5-a9fa3d70fc14", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b", Pod:"coredns-668d6bf9bc-mxrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a32941bc4f", MAC:"3e:1a:56:6c:26:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.699 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b" Namespace="kube-system" Pod="coredns-668d6bf9bc-mxrqc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.573 [INFO][4619] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.576 [INFO][4619] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" iface="eth0" netns="/var/run/netns/cni-1c3996b4-993f-0402-43dc-514802bb39db" Jan 24 00:43:35.709035 containerd[1467]: 2026-01-24 00:43:35.577 [INFO][4619] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" iface="eth0" netns="/var/run/netns/cni-1c3996b4-993f-0402-43dc-514802bb39db" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.581 [INFO][4619] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" iface="eth0" netns="/var/run/netns/cni-1c3996b4-993f-0402-43dc-514802bb39db" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.582 [INFO][4619] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.582 [INFO][4619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.667 [INFO][4633] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.668 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.668 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.689 [WARNING][4633] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.689 [INFO][4633] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.694 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:35.714119 containerd[1467]: 2026-01-24 00:43:35.699 [INFO][4619] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:35.714119 containerd[1467]: time="2026-01-24T00:43:35.711594243Z" level=info msg="TearDown network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" successfully" Jan 24 00:43:35.714119 containerd[1467]: time="2026-01-24T00:43:35.711706702Z" level=info msg="StopPodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" returns successfully" Jan 24 00:43:35.716754 systemd[1]: run-netns-cni\x2d1c3996b4\x2d993f\x2d0402\x2d43dc\x2d514802bb39db.mount: Deactivated successfully. 
Jan 24 00:43:35.718374 containerd[1467]: time="2026-01-24T00:43:35.718243715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-h7n7g,Uid:6a3c8225-48cc-431d-9350-25407dc6fc7b,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:43:35.725731 containerd[1467]: time="2026-01-24T00:43:35.725363870Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:35.744436 containerd[1467]: time="2026-01-24T00:43:35.744315863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:43:35.745135 containerd[1467]: time="2026-01-24T00:43:35.744793485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:43:35.748052 kubelet[2580]: E0124 00:43:35.746831 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:43:35.748052 kubelet[2580]: E0124 00:43:35.747786 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:43:35.748253 kubelet[2580]: E0124 00:43:35.748186 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm8nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664696c7bc-cdlnv_calico-system(d1ac7bc7-7591-48d6-8111-89103d85ee5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:35.749985 kubelet[2580]: E0124 00:43:35.749682 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:43:35.785882 containerd[1467]: time="2026-01-24T00:43:35.785407338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:35.785882 containerd[1467]: time="2026-01-24T00:43:35.785497757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:35.785882 containerd[1467]: time="2026-01-24T00:43:35.785517073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:35.785882 containerd[1467]: time="2026-01-24T00:43:35.785646163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:35.828285 systemd[1]: Started cri-containerd-c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b.scope - libcontainer container c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b. 
Jan 24 00:43:35.862266 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:35.924330 containerd[1467]: time="2026-01-24T00:43:35.924146321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mxrqc,Uid:d3c71020-1a24-4b11-83e5-a9fa3d70fc14,Namespace:kube-system,Attempt:1,} returns sandbox id \"c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b\"" Jan 24 00:43:35.927746 kubelet[2580]: E0124 00:43:35.927562 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:35.937712 containerd[1467]: time="2026-01-24T00:43:35.936454494Z" level=info msg="CreateContainer within sandbox \"c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:43:35.990503 containerd[1467]: time="2026-01-24T00:43:35.990354421Z" level=info msg="CreateContainer within sandbox \"c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0df0a72f698b8ab8651b4b0b924fc5bf7d4082d65c578a10a23c2c2bc8d80056\"" Jan 24 00:43:35.991769 containerd[1467]: time="2026-01-24T00:43:35.991663232Z" level=info msg="StartContainer for \"0df0a72f698b8ab8651b4b0b924fc5bf7d4082d65c578a10a23c2c2bc8d80056\"" Jan 24 00:43:36.075560 systemd-networkd[1405]: cali769619e9882: Gained IPv6LL Jan 24 00:43:36.231527 systemd[1]: Started cri-containerd-0df0a72f698b8ab8651b4b0b924fc5bf7d4082d65c578a10a23c2c2bc8d80056.scope - libcontainer container 0df0a72f698b8ab8651b4b0b924fc5bf7d4082d65c578a10a23c2c2bc8d80056. Jan 24 00:43:36.301273 kubelet[2580]: E0124 00:43:36.298770 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:36.301273 kubelet[2580]: E0124 00:43:36.300597 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:43:36.302111 kubelet[2580]: E0124 00:43:36.301306 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:43:36.319079 systemd-networkd[1405]: cali1b55179886a: Link UP Jan 24 00:43:36.319850 systemd-networkd[1405]: cali1b55179886a: Gained carrier Jan 24 00:43:36.329512 containerd[1467]: 
time="2026-01-24T00:43:36.328699810Z" level=info msg="StartContainer for \"0df0a72f698b8ab8651b4b0b924fc5bf7d4082d65c578a10a23c2c2bc8d80056\" returns successfully" Jan 24 00:43:36.383428 containerd[1467]: time="2026-01-24T00:43:36.383330415Z" level=info msg="StopPodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\"" Jan 24 00:43:36.401232 containerd[1467]: time="2026-01-24T00:43:36.400882945Z" level=info msg="StopPodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\"" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.891 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0 calico-apiserver-6fc4d58c87- calico-apiserver 6a3c8225-48cc-431d-9350-25407dc6fc7b 1054 0 2026-01-24 00:43:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc4d58c87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fc4d58c87-h7n7g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b55179886a [] [] }} ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.893 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.987 [INFO][4741] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" HandleID="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.989 [INFO][4741] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" HandleID="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fc4d58c87-h7n7g", "timestamp":"2026-01-24 00:43:35.987806932 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.990 [INFO][4741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.991 [INFO][4741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:35.991 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.169 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.203 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.225 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.235 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.243 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.243 [INFO][4741] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.248 [INFO][4741] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558 Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.266 [INFO][4741] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.291 [INFO][4741] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.292 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" host="localhost" Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.292 [INFO][4741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
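The IPAM trace above walks the usual Calico path: take the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim one free address, write the block back, release the lock. The sketch below covers only the "claim the next free address in the block" step, using a plain in-memory map where Calico uses a datastore-backed allocation block; it is illustrative only, not Calico's actual types.

package main

import (
	"fmt"
	"net"
)

// Toy stand-in for a Calico IPAM block: a CIDR plus the set of addresses
// already handed out. Real blocks live in the datastore and are written
// back under the host-wide lock seen in the log; this only models the
// "pick the next free IP" step.
type block struct {
	cidr      *net.IPNet
	allocated map[string]bool
}

func (b *block) assignNext() (net.IP, bool) {
	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ; b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

// nextIP returns the numerically next address, carrying across octets.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, allocated: map[string]bool{}}
	// Pretend .128 through .133 are already used by earlier pods on this node.
	for i := 128; i <= 133; i++ {
		b.allocated[fmt.Sprintf("192.168.88.%d", i)] = true
	}
	ip, _ := b.assignNext()
	fmt.Println("next address from the block:", ip) // 192.168.88.134, as claimed in the log
}

Serializing claims behind the host-wide lock is what keeps two concurrent CNI ADDs on the same node from handing out the same address.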
Jan 24 00:43:36.480716 containerd[1467]: 2026-01-24 00:43:36.292 [INFO][4741] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" HandleID="k8s-pod-network.f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.482514 containerd[1467]: 2026-01-24 00:43:36.297 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a3c8225-48cc-431d-9350-25407dc6fc7b", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fc4d58c87-h7n7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b55179886a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:36.482514 containerd[1467]: 2026-01-24 00:43:36.298 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.482514 containerd[1467]: 2026-01-24 00:43:36.298 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b55179886a ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.482514 containerd[1467]: 2026-01-24 00:43:36.331 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.482514 containerd[1467]: 2026-01-24 00:43:36.334 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a3c8225-48cc-431d-9350-25407dc6fc7b", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558", Pod:"calico-apiserver-6fc4d58c87-h7n7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b55179886a", MAC:"12:40:4e:3b:8a:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:36.482514 containerd[1467]: 2026-01-24 00:43:36.427 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4d58c87-h7n7g" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:36.735872 containerd[1467]: time="2026-01-24T00:43:36.731255325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:36.735872 containerd[1467]: time="2026-01-24T00:43:36.731387230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:36.735872 containerd[1467]: time="2026-01-24T00:43:36.731404372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:36.735872 containerd[1467]: time="2026-01-24T00:43:36.731516461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:36.907203 systemd-networkd[1405]: caliaa90c08ad1b: Gained IPv6LL Jan 24 00:43:36.909242 systemd[1]: Started cri-containerd-f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558.scope - libcontainer container f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558. 
Jan 24 00:43:37.032785 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.815 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.815 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" iface="eth0" netns="/var/run/netns/cni-fafd3ba2-8a18-ac7d-d7d2-b8614219e485" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.821 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" iface="eth0" netns="/var/run/netns/cni-fafd3ba2-8a18-ac7d-d7d2-b8614219e485" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.824 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" iface="eth0" netns="/var/run/netns/cni-fafd3ba2-8a18-ac7d-d7d2-b8614219e485" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.825 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.825 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.980 [INFO][4866] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.980 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:36.980 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:37.012 [WARNING][4866] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:37.012 [INFO][4866] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:37.026 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:37.078861 containerd[1467]: 2026-01-24 00:43:37.044 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:37.083176 containerd[1467]: time="2026-01-24T00:43:37.082975276Z" level=info msg="TearDown network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" successfully" Jan 24 00:43:37.083176 containerd[1467]: time="2026-01-24T00:43:37.083167494Z" level=info msg="StopPodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" returns successfully" Jan 24 00:43:37.084272 containerd[1467]: time="2026-01-24T00:43:37.084241497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p7p9p,Uid:aa23a976-feaf-4984-bbe7-f5e048e9da19,Namespace:calico-system,Attempt:1,}" Jan 24 00:43:37.116481 systemd[1]: run-containerd-runc-k8s.io-f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558-runc.qO4cAm.mount: Deactivated successfully. Jan 24 00:43:37.116785 systemd[1]: run-netns-cni\x2dfafd3ba2\x2d8a18\x2dac7d\x2dd7d2\x2db8614219e485.mount: Deactivated successfully. Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.911 [INFO][4814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.911 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" iface="eth0" netns="/var/run/netns/cni-089fb099-981e-561f-0850-30dd8f9dbc31" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.911 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" iface="eth0" netns="/var/run/netns/cni-089fb099-981e-561f-0850-30dd8f9dbc31" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.915 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" iface="eth0" netns="/var/run/netns/cni-089fb099-981e-561f-0850-30dd8f9dbc31" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.915 [INFO][4814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.915 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.992 [INFO][4885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:36.992 [INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:37.033 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:37.070 [WARNING][4885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:37.070 [INFO][4885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:37.082 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:37.121166 containerd[1467]: 2026-01-24 00:43:37.092 [INFO][4814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:37.126317 containerd[1467]: time="2026-01-24T00:43:37.121769892Z" level=info msg="TearDown network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" successfully" Jan 24 00:43:37.126317 containerd[1467]: time="2026-01-24T00:43:37.122360229Z" level=info msg="StopPodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" returns successfully" Jan 24 00:43:37.129834 containerd[1467]: time="2026-01-24T00:43:37.128719057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xzhgv,Uid:3935770d-1f88-434a-a13a-250f66f25ebf,Namespace:calico-system,Attempt:1,}" Jan 24 00:43:37.131239 systemd[1]: run-netns-cni\x2d089fb099\x2d981e\x2d561f\x2d0850\x2d30dd8f9dbc31.mount: Deactivated successfully. Jan 24 00:43:37.312230 containerd[1467]: time="2026-01-24T00:43:37.311440674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4d58c87-h7n7g,Uid:6a3c8225-48cc-431d-9350-25407dc6fc7b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558\"" Jan 24 00:43:37.329559 containerd[1467]: time="2026-01-24T00:43:37.325631784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:43:37.329707 kubelet[2580]: E0124 00:43:37.329062 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:37.329707 kubelet[2580]: E0124 00:43:37.329427 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:37.399150 kubelet[2580]: E0124 00:43:37.398854 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:43:37.510616 kubelet[2580]: I0124 00:43:37.504873 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mxrqc" 
podStartSLOduration=46.504847827 podStartE2EDuration="46.504847827s" podCreationTimestamp="2026-01-24 00:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:43:37.4950911 +0000 UTC m=+49.669292983" watchObservedRunningTime="2026-01-24 00:43:37.504847827 +0000 UTC m=+49.679049681" Jan 24 00:43:37.549498 systemd-networkd[1405]: cali7a32941bc4f: Gained IPv6LL Jan 24 00:43:37.607149 containerd[1467]: time="2026-01-24T00:43:37.605699588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:37.616646 containerd[1467]: time="2026-01-24T00:43:37.613962086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:43:37.616646 containerd[1467]: time="2026-01-24T00:43:37.614121171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:43:37.620564 kubelet[2580]: E0124 00:43:37.619866 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:37.620564 kubelet[2580]: E0124 00:43:37.620432 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:37.622471 kubelet[2580]: E0124 00:43:37.622065 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qhtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-h7n7g_calico-apiserver(6a3c8225-48cc-431d-9350-25407dc6fc7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:37.624092 kubelet[2580]: E0124 00:43:37.624041 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:43:37.802973 systemd-networkd[1405]: cali1b55179886a: Gained IPv6LL Jan 24 00:43:38.021113 systemd-networkd[1405]: cali76d9f27d1a7: Link UP Jan 24 00:43:38.026869 systemd-networkd[1405]: cali76d9f27d1a7: Gained carrier Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.536 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--p7p9p-eth0 goldmane-666569f655- calico-system aa23a976-feaf-4984-bbe7-f5e048e9da19 1085 0 2026-01-24 00:43:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-p7p9p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76d9f27d1a7 [] [] }} ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.536 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.728 [INFO][4936] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" 
HandleID="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.728 [INFO][4936] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" HandleID="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-p7p9p", "timestamp":"2026-01-24 00:43:37.728498099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.728 [INFO][4936] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.728 [INFO][4936] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.728 [INFO][4936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.765 [INFO][4936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.816 [INFO][4936] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.848 [INFO][4936] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.874 [INFO][4936] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.883 [INFO][4936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.883 [INFO][4936] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.891 [INFO][4936] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6 Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:37.961 [INFO][4936] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:38.001 [INFO][4936] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" host="localhost" Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:38.001 [INFO][4936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" host="localhost" Jan 24 
00:43:38.099093 containerd[1467]: 2026-01-24 00:43:38.001 [INFO][4936] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:38.099093 containerd[1467]: 2026-01-24 00:43:38.001 [INFO][4936] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" HandleID="k8s-pod-network.a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.115118 containerd[1467]: 2026-01-24 00:43:38.007 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p7p9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa23a976-feaf-4984-bbe7-f5e048e9da19", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-p7p9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76d9f27d1a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:38.115118 containerd[1467]: 2026-01-24 00:43:38.007 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.115118 containerd[1467]: 2026-01-24 00:43:38.007 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76d9f27d1a7 ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.115118 containerd[1467]: 2026-01-24 00:43:38.034 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.115118 containerd[1467]: 2026-01-24 00:43:38.039 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" 
Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p7p9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa23a976-feaf-4984-bbe7-f5e048e9da19", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6", Pod:"goldmane-666569f655-p7p9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76d9f27d1a7", MAC:"46:3a:19:49:03:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:38.115118 containerd[1467]: 2026-01-24 00:43:38.093 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6" Namespace="calico-system" Pod="goldmane-666569f655-p7p9p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:38.223762 systemd-networkd[1405]: califcb97284f85: Link UP Jan 24 00:43:38.224393 systemd-networkd[1405]: califcb97284f85: Gained carrier Jan 24 00:43:38.239853 containerd[1467]: time="2026-01-24T00:43:38.236866142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:38.239853 containerd[1467]: time="2026-01-24T00:43:38.237095208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:38.239853 containerd[1467]: time="2026-01-24T00:43:38.237120956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:38.272050 containerd[1467]: time="2026-01-24T00:43:38.271145408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:38.355688 kubelet[2580]: E0124 00:43:38.355649 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:38.361125 kubelet[2580]: E0124 00:43:38.360228 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:43:38.378205 systemd[1]: Started cri-containerd-a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6.scope - libcontainer container a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6. Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:37.639 [INFO][4917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xzhgv-eth0 csi-node-driver- calico-system 3935770d-1f88-434a-a13a-250f66f25ebf 1086 0 2026-01-24 00:43:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xzhgv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califcb97284f85 [] [] }} ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:37.639 [INFO][4917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:37.781 [INFO][4944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" HandleID="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:37.785 [INFO][4944] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" HandleID="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040b1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xzhgv", "timestamp":"2026-01-24 00:43:37.781433289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:37.785 [INFO][4944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.002 [INFO][4944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.003 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.021 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.044 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.091 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.139 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.153 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.154 [INFO][4944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.159 [INFO][4944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6 Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.175 [INFO][4944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.208 [INFO][4944] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.208 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" host="localhost" Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.208 [INFO][4944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:43:38.394838 containerd[1467]: 2026-01-24 00:43:38.208 [INFO][4944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" HandleID="k8s-pod-network.a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.397495 containerd[1467]: 2026-01-24 00:43:38.214 [INFO][4917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xzhgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3935770d-1f88-434a-a13a-250f66f25ebf", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xzhgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califcb97284f85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:38.397495 containerd[1467]: 2026-01-24 00:43:38.216 [INFO][4917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.397495 containerd[1467]: 2026-01-24 00:43:38.216 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcb97284f85 ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.397495 containerd[1467]: 2026-01-24 00:43:38.225 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.397495 containerd[1467]: 2026-01-24 00:43:38.231 [INFO][4917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xzhgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3935770d-1f88-434a-a13a-250f66f25ebf", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6", Pod:"csi-node-driver-xzhgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califcb97284f85", MAC:"62:91:ef:95:e1:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:38.397495 containerd[1467]: 2026-01-24 00:43:38.379 [INFO][4917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6" Namespace="calico-system" Pod="csi-node-driver-xzhgv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:38.488494 containerd[1467]: time="2026-01-24T00:43:38.484171941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:43:38.488494 containerd[1467]: time="2026-01-24T00:43:38.484272237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:43:38.488494 containerd[1467]: time="2026-01-24T00:43:38.484316289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:38.488494 containerd[1467]: time="2026-01-24T00:43:38.484462871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:43:38.496621 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:38.593729 systemd[1]: Started cri-containerd-a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6.scope - libcontainer container a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6. 
Jan 24 00:43:38.685684 containerd[1467]: time="2026-01-24T00:43:38.685626790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p7p9p,Uid:aa23a976-feaf-4984-bbe7-f5e048e9da19,Namespace:calico-system,Attempt:1,} returns sandbox id \"a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6\"" Jan 24 00:43:38.697763 containerd[1467]: time="2026-01-24T00:43:38.691642277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:43:38.702792 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:43:38.780372 containerd[1467]: time="2026-01-24T00:43:38.779767030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xzhgv,Uid:3935770d-1f88-434a-a13a-250f66f25ebf,Namespace:calico-system,Attempt:1,} returns sandbox id \"a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6\"" Jan 24 00:43:38.796267 containerd[1467]: time="2026-01-24T00:43:38.795686603Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:38.802267 containerd[1467]: time="2026-01-24T00:43:38.801216190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:43:38.802267 containerd[1467]: time="2026-01-24T00:43:38.801381618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:43:38.804724 kubelet[2580]: E0124 00:43:38.801562 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:43:38.804724 kubelet[2580]: E0124 00:43:38.801613 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:43:38.804724 kubelet[2580]: E0124 00:43:38.801868 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78mwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p7p9p_calico-system(aa23a976-feaf-4984-bbe7-f5e048e9da19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:38.804724 kubelet[2580]: E0124 00:43:38.803503 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:43:38.805224 containerd[1467]: 
time="2026-01-24T00:43:38.803696999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:43:38.877399 containerd[1467]: time="2026-01-24T00:43:38.875226302Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:38.878520 containerd[1467]: time="2026-01-24T00:43:38.878298611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:43:38.878520 containerd[1467]: time="2026-01-24T00:43:38.878426368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:43:38.879790 kubelet[2580]: E0124 00:43:38.879169 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:43:38.879790 kubelet[2580]: E0124 00:43:38.879283 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:43:38.879790 kubelet[2580]: E0124 00:43:38.879465 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:38.885485 containerd[1467]: time="2026-01-24T00:43:38.885358057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:43:38.968729 containerd[1467]: time="2026-01-24T00:43:38.967513683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:38.977543 containerd[1467]: time="2026-01-24T00:43:38.977314130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:43:38.977543 containerd[1467]: time="2026-01-24T00:43:38.977434865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:43:38.979393 kubelet[2580]: E0124 00:43:38.978400 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:43:38.979393 kubelet[2580]: E0124 00:43:38.978467 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:43:38.979393 kubelet[2580]: E0124 00:43:38.978791 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:38.985122 kubelet[2580]: E0124 00:43:38.980108 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:39.385190 kubelet[2580]: E0124 00:43:39.383311 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:43:39.389625 kubelet[2580]: E0124 00:43:39.387239 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:43:39.391276 kubelet[2580]: E0124 00:43:39.387640 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:43:39.391866 kubelet[2580]: E0124 00:43:39.391762 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:39.404724 systemd-networkd[1405]: califcb97284f85: Gained IPv6LL Jan 24 00:43:40.044344 systemd-networkd[1405]: cali76d9f27d1a7: Gained IPv6LL Jan 24 00:43:40.399348 kubelet[2580]: E0124 00:43:40.397538 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:43:40.404710 kubelet[2580]: E0124 00:43:40.404659 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:47.386748 containerd[1467]: time="2026-01-24T00:43:47.386451845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:43:47.484642 containerd[1467]: time="2026-01-24T00:43:47.484518812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:47.488408 containerd[1467]: time="2026-01-24T00:43:47.488292162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:43:47.488556 containerd[1467]: time="2026-01-24T00:43:47.488354378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:43:47.488992 kubelet[2580]: E0124 00:43:47.488809 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:43:47.488992 kubelet[2580]: E0124 00:43:47.488959 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:43:47.489609 kubelet[2580]: E0124 00:43:47.489205 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6f87900357124d41bee7c9d9fda81593,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:47.494512 containerd[1467]: time="2026-01-24T00:43:47.494475440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:43:47.565136 containerd[1467]: time="2026-01-24T00:43:47.564870078Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:47.570486 containerd[1467]: time="2026-01-24T00:43:47.570170135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:43:47.573523 containerd[1467]: time="2026-01-24T00:43:47.570531161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:43:47.573682 kubelet[2580]: E0124 00:43:47.571764 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:43:47.573682 kubelet[2580]: E0124 00:43:47.571999 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:43:47.573682 kubelet[2580]: E0124 00:43:47.572225 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:47.575850 kubelet[2580]: E0124 00:43:47.575776 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:43:48.314711 containerd[1467]: time="2026-01-24T00:43:48.314523201Z" level=info msg="StopPodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\"" Jan 24 00:43:48.385729 containerd[1467]: 
time="2026-01-24T00:43:48.385607612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:43:48.479503 containerd[1467]: time="2026-01-24T00:43:48.478790417Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:48.496833 containerd[1467]: time="2026-01-24T00:43:48.496365128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:43:48.496833 containerd[1467]: time="2026-01-24T00:43:48.496388020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:43:48.498006 kubelet[2580]: E0124 00:43:48.497575 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:48.498006 kubelet[2580]: E0124 00:43:48.497802 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:48.498536 kubelet[2580]: E0124 00:43:48.498161 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gjzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-65n6d_calico-apiserver(9becf02e-a8cd-4e6f-92b4-b46fa4218220): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:48.499982 kubelet[2580]: E0124 00:43:48.499663 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.445 [WARNING][5088] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0", GenerateName:"calico-kube-controllers-664696c7bc-", Namespace:"calico-system", SelfLink:"", UID:"d1ac7bc7-7591-48d6-8111-89103d85ee5f", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664696c7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9", Pod:"calico-kube-controllers-664696c7bc-cdlnv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa90c08ad1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.446 [INFO][5088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.446 [INFO][5088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" iface="eth0" netns="" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.446 [INFO][5088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.446 [INFO][5088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.517 [INFO][5100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.518 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.518 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.528 [WARNING][5100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.529 [INFO][5100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.533 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:48.541410 containerd[1467]: 2026-01-24 00:43:48.537 [INFO][5088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.541410 containerd[1467]: time="2026-01-24T00:43:48.541373068Z" level=info msg="TearDown network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" successfully" Jan 24 00:43:48.541410 containerd[1467]: time="2026-01-24T00:43:48.541405699Z" level=info msg="StopPodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" returns successfully" Jan 24 00:43:48.547303 containerd[1467]: time="2026-01-24T00:43:48.543151565Z" level=info msg="RemovePodSandbox for \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\"" Jan 24 00:43:48.547303 containerd[1467]: time="2026-01-24T00:43:48.546307375Z" level=info msg="Forcibly stopping sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\"" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.682 [WARNING][5117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0", GenerateName:"calico-kube-controllers-664696c7bc-", Namespace:"calico-system", SelfLink:"", UID:"d1ac7bc7-7591-48d6-8111-89103d85ee5f", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664696c7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f07fe870f6024e75ec0e456d71c8598541ca33a6ab2d23e732e022d07af064d9", Pod:"calico-kube-controllers-664696c7bc-cdlnv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa90c08ad1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.682 [INFO][5117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.682 [INFO][5117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" iface="eth0" netns="" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.682 [INFO][5117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.682 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.748 [INFO][5128] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.748 [INFO][5128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.748 [INFO][5128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.765 [WARNING][5128] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.765 [INFO][5128] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" HandleID="k8s-pod-network.4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Workload="localhost-k8s-calico--kube--controllers--664696c7bc--cdlnv-eth0" Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.770 [INFO][5128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:48.783678 containerd[1467]: 2026-01-24 00:43:48.774 [INFO][5117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028" Jan 24 00:43:48.783678 containerd[1467]: time="2026-01-24T00:43:48.779879527Z" level=info msg="TearDown network for sandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" successfully" Jan 24 00:43:48.806518 containerd[1467]: time="2026-01-24T00:43:48.805395910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:48.806518 containerd[1467]: time="2026-01-24T00:43:48.805539818Z" level=info msg="RemovePodSandbox \"4d26ec824a54e73e9fdf9ea87f1c53a1a8465ee8835eedca29c73fe0bd1c5028\" returns successfully" Jan 24 00:43:48.808310 containerd[1467]: time="2026-01-24T00:43:48.807761773Z" level=info msg="StopPodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\"" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:48.929 [WARNING][5146] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xzhgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3935770d-1f88-434a-a13a-250f66f25ebf", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6", Pod:"csi-node-driver-xzhgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califcb97284f85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:48.930 [INFO][5146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:48.930 [INFO][5146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" iface="eth0" netns="" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:48.930 [INFO][5146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:48.930 [INFO][5146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.001 [INFO][5154] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.002 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.002 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.088 [WARNING][5154] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.089 [INFO][5154] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.095 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:49.103412 containerd[1467]: 2026-01-24 00:43:49.099 [INFO][5146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.104734 containerd[1467]: time="2026-01-24T00:43:49.103763273Z" level=info msg="TearDown network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" successfully" Jan 24 00:43:49.104734 containerd[1467]: time="2026-01-24T00:43:49.103868959Z" level=info msg="StopPodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" returns successfully" Jan 24 00:43:49.109131 containerd[1467]: time="2026-01-24T00:43:49.108816757Z" level=info msg="RemovePodSandbox for \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\"" Jan 24 00:43:49.109322 containerd[1467]: time="2026-01-24T00:43:49.109138195Z" level=info msg="Forcibly stopping sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\"" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.206 [WARNING][5172] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xzhgv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3935770d-1f88-434a-a13a-250f66f25ebf", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7806a61393e4f67ae92829a175ca83351a72fccfa09f98ec172ae9c416b09e6", Pod:"csi-node-driver-xzhgv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califcb97284f85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.206 [INFO][5172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.206 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" iface="eth0" netns="" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.206 [INFO][5172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.206 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.263 [INFO][5180] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.263 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.263 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.282 [WARNING][5180] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.283 [INFO][5180] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" HandleID="k8s-pod-network.e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Workload="localhost-k8s-csi--node--driver--xzhgv-eth0" Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.287 [INFO][5180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:49.296500 containerd[1467]: 2026-01-24 00:43:49.292 [INFO][5172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7" Jan 24 00:43:49.296500 containerd[1467]: time="2026-01-24T00:43:49.296412941Z" level=info msg="TearDown network for sandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" successfully" Jan 24 00:43:49.309647 containerd[1467]: time="2026-01-24T00:43:49.309510011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:49.309806 containerd[1467]: time="2026-01-24T00:43:49.309656193Z" level=info msg="RemovePodSandbox \"e2f805a58c0b8300d37beab861d83bd4cf21918588f744656ccf5540cf406ee7\" returns successfully" Jan 24 00:43:49.311544 containerd[1467]: time="2026-01-24T00:43:49.311459280Z" level=info msg="StopPodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\"" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.530 [WARNING][5198] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a3c8225-48cc-431d-9350-25407dc6fc7b", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558", Pod:"calico-apiserver-6fc4d58c87-h7n7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b55179886a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.532 [INFO][5198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.532 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" iface="eth0" netns="" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.532 [INFO][5198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.532 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.612 [INFO][5207] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.612 [INFO][5207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.612 [INFO][5207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.625 [WARNING][5207] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.625 [INFO][5207] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.631 [INFO][5207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:49.639134 containerd[1467]: 2026-01-24 00:43:49.635 [INFO][5198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.640293 containerd[1467]: time="2026-01-24T00:43:49.640135289Z" level=info msg="TearDown network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" successfully" Jan 24 00:43:49.640293 containerd[1467]: time="2026-01-24T00:43:49.640172628Z" level=info msg="StopPodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" returns successfully" Jan 24 00:43:49.641187 containerd[1467]: time="2026-01-24T00:43:49.640994222Z" level=info msg="RemovePodSandbox for \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\"" Jan 24 00:43:49.641187 containerd[1467]: time="2026-01-24T00:43:49.641022073Z" level=info msg="Forcibly stopping sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\"" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.777 [WARNING][5224] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a3c8225-48cc-431d-9350-25407dc6fc7b", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7cbe0154aaea83d79103e19db537887704020e6c82007b83898502e00529558", Pod:"calico-apiserver-6fc4d58c87-h7n7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b55179886a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.777 [INFO][5224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.777 [INFO][5224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" iface="eth0" netns="" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.777 [INFO][5224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.777 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.881 [INFO][5233] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.881 [INFO][5233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.881 [INFO][5233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.895 [WARNING][5233] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.895 [INFO][5233] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" HandleID="k8s-pod-network.6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--h7n7g-eth0" Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.900 [INFO][5233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:49.909500 containerd[1467]: 2026-01-24 00:43:49.905 [INFO][5224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf" Jan 24 00:43:49.909500 containerd[1467]: time="2026-01-24T00:43:49.909324512Z" level=info msg="TearDown network for sandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" successfully" Jan 24 00:43:49.925361 containerd[1467]: time="2026-01-24T00:43:49.925193134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:49.925641 containerd[1467]: time="2026-01-24T00:43:49.925386954Z" level=info msg="RemovePodSandbox \"6849869a6bb415d06108a6f1bee893ad08068cf779970827bd3649c676569eaf\" returns successfully" Jan 24 00:43:49.926518 containerd[1467]: time="2026-01-24T00:43:49.926302127Z" level=info msg="StopPodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\"" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.090 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"9becf02e-a8cd-4e6f-92b4-b46fa4218220", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f", Pod:"calico-apiserver-6fc4d58c87-65n6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali769619e9882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.094 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.094 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" iface="eth0" netns="" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.094 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.094 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.167 [INFO][5259] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.167 [INFO][5259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.167 [INFO][5259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.178 [WARNING][5259] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.179 [INFO][5259] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.184 [INFO][5259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:50.194281 containerd[1467]: 2026-01-24 00:43:50.189 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.194281 containerd[1467]: time="2026-01-24T00:43:50.193562606Z" level=info msg="TearDown network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" successfully" Jan 24 00:43:50.194281 containerd[1467]: time="2026-01-24T00:43:50.193594697Z" level=info msg="StopPodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" returns successfully" Jan 24 00:43:50.195676 containerd[1467]: time="2026-01-24T00:43:50.195419611Z" level=info msg="RemovePodSandbox for \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\"" Jan 24 00:43:50.195676 containerd[1467]: time="2026-01-24T00:43:50.195453373Z" level=info msg="Forcibly stopping sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\"" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.303 [WARNING][5276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0", GenerateName:"calico-apiserver-6fc4d58c87-", Namespace:"calico-apiserver", SelfLink:"", UID:"9becf02e-a8cd-4e6f-92b4-b46fa4218220", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4d58c87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1612202635131e50e01e7ca75c6c16078fdd299ceefaca37fd19445e87777f4f", Pod:"calico-apiserver-6fc4d58c87-65n6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali769619e9882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.304 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.304 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" iface="eth0" netns="" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.304 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.304 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.390 [INFO][5284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.391 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.391 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.406 [WARNING][5284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.406 [INFO][5284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" HandleID="k8s-pod-network.c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Workload="localhost-k8s-calico--apiserver--6fc4d58c87--65n6d-eth0" Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.420 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:50.440792 containerd[1467]: 2026-01-24 00:43:50.431 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046" Jan 24 00:43:50.440792 containerd[1467]: time="2026-01-24T00:43:50.439315332Z" level=info msg="TearDown network for sandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" successfully" Jan 24 00:43:50.458732 containerd[1467]: time="2026-01-24T00:43:50.458472624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:50.458880 containerd[1467]: time="2026-01-24T00:43:50.458635457Z" level=info msg="RemovePodSandbox \"c6f41142597a48247089e6e883d6411eab0bc3083168543a8fe221e6dd186046\" returns successfully" Jan 24 00:43:50.467004 containerd[1467]: time="2026-01-24T00:43:50.459422701Z" level=info msg="StopPodSandbox for \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\"" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.592 [WARNING][5301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3c71020-1a24-4b11-83e5-a9fa3d70fc14", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b", Pod:"coredns-668d6bf9bc-mxrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a32941bc4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.592 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.594 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" iface="eth0" netns="" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.594 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.594 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.690 [INFO][5310] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.690 [INFO][5310] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.690 [INFO][5310] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.708 [WARNING][5310] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.709 [INFO][5310] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.714 [INFO][5310] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:50.725763 containerd[1467]: 2026-01-24 00:43:50.721 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.725763 containerd[1467]: time="2026-01-24T00:43:50.725514810Z" level=info msg="TearDown network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" successfully" Jan 24 00:43:50.725763 containerd[1467]: time="2026-01-24T00:43:50.725548784Z" level=info msg="StopPodSandbox for \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" returns successfully" Jan 24 00:43:50.727238 containerd[1467]: time="2026-01-24T00:43:50.726776347Z" level=info msg="RemovePodSandbox for \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\"" Jan 24 00:43:50.727238 containerd[1467]: time="2026-01-24T00:43:50.726809828Z" level=info msg="Forcibly stopping sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\"" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.793 [WARNING][5328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d3c71020-1a24-4b11-83e5-a9fa3d70fc14", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2ecd3d66b5bd887c547369bda136ed7c4f50e11fb2d4a0c0433618d77e5f92b", Pod:"coredns-668d6bf9bc-mxrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a32941bc4f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.794 [INFO][5328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.794 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" iface="eth0" netns="" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.794 [INFO][5328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.794 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.874 [INFO][5337] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.875 [INFO][5337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.876 [INFO][5337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.933 [WARNING][5337] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.934 [INFO][5337] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" HandleID="k8s-pod-network.152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Workload="localhost-k8s-coredns--668d6bf9bc--mxrqc-eth0" Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.945 [INFO][5337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:50.965130 containerd[1467]: 2026-01-24 00:43:50.958 [INFO][5328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196" Jan 24 00:43:50.967255 containerd[1467]: time="2026-01-24T00:43:50.965368198Z" level=info msg="TearDown network for sandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" successfully" Jan 24 00:43:50.978431 containerd[1467]: time="2026-01-24T00:43:50.978223774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:50.978431 containerd[1467]: time="2026-01-24T00:43:50.978299565Z" level=info msg="RemovePodSandbox \"152c2de30bb317b11deb267d499d048b92d9ff8673cd8898ea823053ef828196\" returns successfully" Jan 24 00:43:50.984373 containerd[1467]: time="2026-01-24T00:43:50.984248327Z" level=info msg="StopPodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\"" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.107 [WARNING][5355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p7p9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa23a976-feaf-4984-bbe7-f5e048e9da19", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6", Pod:"goldmane-666569f655-p7p9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76d9f27d1a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.108 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.108 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" iface="eth0" netns="" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.108 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.108 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.168 [INFO][5364] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.168 [INFO][5364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.168 [INFO][5364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.184 [WARNING][5364] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.185 [INFO][5364] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.190 [INFO][5364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:51.200306 containerd[1467]: 2026-01-24 00:43:51.196 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.200306 containerd[1467]: time="2026-01-24T00:43:51.200278488Z" level=info msg="TearDown network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" successfully" Jan 24 00:43:51.200306 containerd[1467]: time="2026-01-24T00:43:51.200309195Z" level=info msg="StopPodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" returns successfully" Jan 24 00:43:51.202259 containerd[1467]: time="2026-01-24T00:43:51.202008511Z" level=info msg="RemovePodSandbox for \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\"" Jan 24 00:43:51.202259 containerd[1467]: time="2026-01-24T00:43:51.202129027Z" level=info msg="Forcibly stopping sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\"" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.296 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p7p9p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa23a976-feaf-4984-bbe7-f5e048e9da19", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4928b600dcd76ecf587f1c3ac461c19e00757db708d991626a21d4f1e57b4b6", Pod:"goldmane-666569f655-p7p9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76d9f27d1a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.297 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.297 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" iface="eth0" netns="" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.297 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.297 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.374 [INFO][5390] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.375 [INFO][5390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.375 [INFO][5390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.391 [WARNING][5390] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.391 [INFO][5390] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" HandleID="k8s-pod-network.585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Workload="localhost-k8s-goldmane--666569f655--p7p9p-eth0" Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.397 [INFO][5390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:51.406187 containerd[1467]: 2026-01-24 00:43:51.400 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62" Jan 24 00:43:51.406851 containerd[1467]: time="2026-01-24T00:43:51.406231403Z" level=info msg="TearDown network for sandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" successfully" Jan 24 00:43:51.418519 containerd[1467]: time="2026-01-24T00:43:51.418374550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:51.418684 containerd[1467]: time="2026-01-24T00:43:51.418540227Z" level=info msg="RemovePodSandbox \"585960e99554da119bcaa2456153c6a1e3eba20f08ade66b5b95c05b801c9e62\" returns successfully" Jan 24 00:43:51.420285 containerd[1467]: time="2026-01-24T00:43:51.420113724Z" level=info msg="StopPodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\"" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.493 [WARNING][5408] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3c21f44-8ae7-42a7-a6ec-8b8562e76305", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296", Pod:"coredns-668d6bf9bc-dzdbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibed0514ec7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.493 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.493 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" iface="eth0" netns="" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.493 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.493 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.541 [INFO][5416] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.542 [INFO][5416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.542 [INFO][5416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.557 [WARNING][5416] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.558 [INFO][5416] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.568 [INFO][5416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:51.575358 containerd[1467]: 2026-01-24 00:43:51.571 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.576440 containerd[1467]: time="2026-01-24T00:43:51.575372091Z" level=info msg="TearDown network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" successfully" Jan 24 00:43:51.576440 containerd[1467]: time="2026-01-24T00:43:51.575405214Z" level=info msg="StopPodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" returns successfully" Jan 24 00:43:51.576690 containerd[1467]: time="2026-01-24T00:43:51.576592535Z" level=info msg="RemovePodSandbox for \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\"" Jan 24 00:43:51.576740 containerd[1467]: time="2026-01-24T00:43:51.576693693Z" level=info msg="Forcibly stopping sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\"" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.651 [WARNING][5434] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3c21f44-8ae7-42a7-a6ec-8b8562e76305", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3002806af1fd439b6b1cb211ca412ff191955b535762c379e59af1dd148bb296", Pod:"coredns-668d6bf9bc-dzdbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibed0514ec7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.658 [INFO][5434] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.658 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" iface="eth0" netns="" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.658 [INFO][5434] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.658 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.722 [INFO][5443] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.723 [INFO][5443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.725 [INFO][5443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.734 [WARNING][5443] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.734 [INFO][5443] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" HandleID="k8s-pod-network.f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Workload="localhost-k8s-coredns--668d6bf9bc--dzdbl-eth0" Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.738 [INFO][5443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:51.749292 containerd[1467]: 2026-01-24 00:43:51.745 [INFO][5434] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4" Jan 24 00:43:51.749292 containerd[1467]: time="2026-01-24T00:43:51.749136817Z" level=info msg="TearDown network for sandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" successfully" Jan 24 00:43:51.757938 containerd[1467]: time="2026-01-24T00:43:51.757733222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:51.758121 containerd[1467]: time="2026-01-24T00:43:51.757982316Z" level=info msg="RemovePodSandbox \"f5bb8cf2dd0f1578b825698261f0671f8a372d69751af7bd8056cd4e9f0f4df4\" returns successfully" Jan 24 00:43:51.759107 containerd[1467]: time="2026-01-24T00:43:51.758995620Z" level=info msg="StopPodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\"" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.830 [WARNING][5461] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" WorkloadEndpoint="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.832 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.832 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" iface="eth0" netns="" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.832 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.832 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.908 [INFO][5470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.908 [INFO][5470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.909 [INFO][5470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.920 [WARNING][5470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.920 [INFO][5470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.923 [INFO][5470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:51.930225 containerd[1467]: 2026-01-24 00:43:51.926 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:51.930225 containerd[1467]: time="2026-01-24T00:43:51.930116694Z" level=info msg="TearDown network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" successfully" Jan 24 00:43:51.930225 containerd[1467]: time="2026-01-24T00:43:51.930147701Z" level=info msg="StopPodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" returns successfully" Jan 24 00:43:51.931196 containerd[1467]: time="2026-01-24T00:43:51.930784646Z" level=info msg="RemovePodSandbox for \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\"" Jan 24 00:43:51.931196 containerd[1467]: time="2026-01-24T00:43:51.930809011Z" level=info msg="Forcibly stopping sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\"" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.011 [WARNING][5488] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" WorkloadEndpoint="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.013 [INFO][5488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.013 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" iface="eth0" netns="" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.013 [INFO][5488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.013 [INFO][5488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.073 [INFO][5496] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.074 [INFO][5496] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.074 [INFO][5496] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.090 [WARNING][5496] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.090 [INFO][5496] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" HandleID="k8s-pod-network.87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Workload="localhost-k8s-whisker--7bd658f48f--9r589-eth0" Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.093 [INFO][5496] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:43:52.100491 containerd[1467]: 2026-01-24 00:43:52.096 [INFO][5488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1" Jan 24 00:43:52.101447 containerd[1467]: time="2026-01-24T00:43:52.100509510Z" level=info msg="TearDown network for sandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" successfully" Jan 24 00:43:52.113445 containerd[1467]: time="2026-01-24T00:43:52.113299666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:43:52.113445 containerd[1467]: time="2026-01-24T00:43:52.113433454Z" level=info msg="RemovePodSandbox \"87deda6f8a6723dd2adad9fcec342eef9a28a70d2b9ae6d16a650566e9a72bd1\" returns successfully" Jan 24 00:43:52.388329 containerd[1467]: time="2026-01-24T00:43:52.387566269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:43:52.468866 containerd[1467]: time="2026-01-24T00:43:52.468727401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:52.472731 containerd[1467]: time="2026-01-24T00:43:52.472582555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:43:52.472731 containerd[1467]: time="2026-01-24T00:43:52.472702248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:43:52.473321 kubelet[2580]: E0124 00:43:52.473201 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:43:52.473777 kubelet[2580]: E0124 00:43:52.473319 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:43:52.473777 kubelet[2580]: E0124 
00:43:52.473644 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm8nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664696c7bc-cdlnv_calico-system(d1ac7bc7-7591-48d6-8111-89103d85ee5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:52.474394 containerd[1467]: time="2026-01-24T00:43:52.474195583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:43:52.475412 kubelet[2580]: E0124 00:43:52.474772 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:43:52.544883 containerd[1467]: time="2026-01-24T00:43:52.544683739Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:52.547884 containerd[1467]: time="2026-01-24T00:43:52.547607527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:43:52.547884 containerd[1467]: time="2026-01-24T00:43:52.547725667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:43:52.548342 kubelet[2580]: E0124 00:43:52.547877 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:52.548342 kubelet[2580]: E0124 00:43:52.548131 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:43:52.548682 kubelet[2580]: E0124 00:43:52.548339 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qhtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-h7n7g_calico-apiserver(6a3c8225-48cc-431d-9350-25407dc6fc7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:52.550258 kubelet[2580]: E0124 00:43:52.550001 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:43:55.390747 containerd[1467]: time="2026-01-24T00:43:55.388946426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:43:55.464577 containerd[1467]: time="2026-01-24T00:43:55.464479957Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:55.467107 containerd[1467]: time="2026-01-24T00:43:55.466855718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:43:55.467107 containerd[1467]: time="2026-01-24T00:43:55.467112108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:43:55.467512 kubelet[2580]: E0124 00:43:55.467319 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:43:55.467512 kubelet[2580]: E0124 00:43:55.467395 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:43:55.468728 kubelet[2580]: E0124 00:43:55.467762 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:55.469118 containerd[1467]: time="2026-01-24T00:43:55.467771512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:43:55.586478 containerd[1467]: time="2026-01-24T00:43:55.586249022Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:55.594362 containerd[1467]: time="2026-01-24T00:43:55.594144538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:43:55.594362 containerd[1467]: time="2026-01-24T00:43:55.594365438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:43:55.594813 kubelet[2580]: E0124 00:43:55.594681 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:43:55.594813 kubelet[2580]: E0124 00:43:55.594735 2580 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:43:55.596302 containerd[1467]: time="2026-01-24T00:43:55.595834326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:43:55.597149 kubelet[2580]: E0124 00:43:55.595843 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78mwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p7p9p_calico-system(aa23a976-feaf-4984-bbe7-f5e048e9da19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:55.599865 kubelet[2580]: E0124 00:43:55.599784 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:43:55.686531 containerd[1467]: time="2026-01-24T00:43:55.685102456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:43:55.689990 containerd[1467]: time="2026-01-24T00:43:55.688451171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:43:55.689990 containerd[1467]: time="2026-01-24T00:43:55.688553260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:43:55.690197 kubelet[2580]: E0124 00:43:55.688809 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:43:55.690197 kubelet[2580]: E0124 00:43:55.688878 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:43:55.690197 kubelet[2580]: E0124 00:43:55.689176 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:43:55.690676 kubelet[2580]: E0124 00:43:55.690531 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:43:58.386662 kubelet[2580]: E0124 00:43:58.386507 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:00.448617 kubelet[2580]: E0124 00:44:00.448302 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:44:01.383137 kubelet[2580]: E0124 00:44:01.382846 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:02.087287 systemd[1]: run-containerd-runc-k8s.io-45e21df47c4d3996b18f90d154cf0eb998c3a2f66d4599cb54a132047584ce1a-runc.hvc8sG.mount: Deactivated successfully. Jan 24 00:44:02.262783 kubelet[2580]: E0124 00:44:02.261808 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:02.384522 kubelet[2580]: E0124 00:44:02.384247 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:44:04.381944 kubelet[2580]: E0124 00:44:04.381796 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:04.383289 kubelet[2580]: E0124 00:44:04.383230 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:44:04.449483 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:46884.service - OpenSSH per-connection server daemon (10.0.0.1:46884). Jan 24 00:44:04.517127 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 46884 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:04.519601 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:04.529679 systemd-logind[1447]: New session 10 of user core. Jan 24 00:44:04.538669 systemd[1]: Started session-10.scope - Session 10 of User core. 
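The kubelet entries above show each failed pull being reported first as ErrImagePull (an attempt was made and the registry returned NotFound) and then as ImagePullBackOff (further attempts are delayed with an increasing back-off). The sketch below illustrates that kind of exponential delay; the 10-second initial delay and 300-second cap are assumptions about typical kubelet defaults, not values taken from this log, and the code is an illustration rather than the kubelet's own implementation.

package main

import (
	"fmt"
	"time"
)

// backoffDelay returns the wait before the given pull attempt, doubling the
// previous delay and capping it, in the spirit of an image-pull back-off.
func backoffDelay(initial, max time.Duration, attempt int) time.Duration {
	d := initial
	for i := 1; i < attempt; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("failed pull %d -> next retry in %s\n",
			attempt, backoffDelay(10*time.Second, 300*time.Second, attempt))
	}
}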
Jan 24 00:44:04.787337 sshd[5541]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:04.792786 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:46884.service: Deactivated successfully. Jan 24 00:44:04.796517 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:44:04.799679 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:44:04.802625 systemd-logind[1447]: Removed session 10. Jan 24 00:44:05.384642 kubelet[2580]: E0124 00:44:05.384460 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:44:08.383690 kubelet[2580]: E0124 00:44:08.383552 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:44:09.805627 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:46898.service - OpenSSH per-connection server daemon (10.0.0.1:46898). Jan 24 00:44:09.878064 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 46898 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:09.881312 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:09.892448 systemd-logind[1447]: New session 11 of user core. Jan 24 00:44:09.898386 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:44:10.108211 sshd[5560]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:10.114628 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:46898.service: Deactivated successfully. Jan 24 00:44:10.118521 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:44:10.119851 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:44:10.123097 systemd-logind[1447]: Removed session 11. 
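The recurring dns.go "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the resolver limit (three on glibc-based systems), so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied when pod DNS config is built. A minimal sketch of that truncation follows, assuming the standard /etc/resolv.conf location; it is an illustration, not the kubelet's parser.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // conventional resolver limit that triggers the warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s) beyond the limit\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}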
Jan 24 00:44:10.385022 kubelet[2580]: E0124 00:44:10.384288 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:44:13.386625 containerd[1467]: time="2026-01-24T00:44:13.383067109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:44:13.516731 containerd[1467]: time="2026-01-24T00:44:13.515576709Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:13.520666 containerd[1467]: time="2026-01-24T00:44:13.520550609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:44:13.520780 containerd[1467]: time="2026-01-24T00:44:13.520714224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:44:13.522564 kubelet[2580]: E0124 00:44:13.522512 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:44:13.523801 kubelet[2580]: E0124 00:44:13.523736 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:44:13.524348 kubelet[2580]: E0124 00:44:13.524044 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gjzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-65n6d_calico-apiserver(9becf02e-a8cd-4e6f-92b4-b46fa4218220): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:13.527264 kubelet[2580]: E0124 00:44:13.527118 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:44:15.137295 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:44140.service - OpenSSH per-connection server daemon (10.0.0.1:44140). Jan 24 00:44:15.217620 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 44140 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:15.222025 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:15.238264 systemd-logind[1447]: New session 12 of user core. Jan 24 00:44:15.245582 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:44:15.517760 sshd[5575]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:15.532146 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. 
Jan 24 00:44:15.532746 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:44140.service: Deactivated successfully. Jan 24 00:44:15.537090 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:44:15.541377 systemd-logind[1447]: Removed session 12. Jan 24 00:44:17.404615 containerd[1467]: time="2026-01-24T00:44:17.404341492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:44:17.545423 containerd[1467]: time="2026-01-24T00:44:17.543662197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:17.599252 containerd[1467]: time="2026-01-24T00:44:17.598474698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:44:17.618850 containerd[1467]: time="2026-01-24T00:44:17.617851431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:44:17.619782 kubelet[2580]: E0124 00:44:17.619366 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:44:17.619782 kubelet[2580]: E0124 00:44:17.619418 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:44:17.621218 containerd[1467]: time="2026-01-24T00:44:17.620035018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:44:17.622448 kubelet[2580]: E0124 00:44:17.621339 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qhtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-h7n7g_calico-apiserver(6a3c8225-48cc-431d-9350-25407dc6fc7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:17.623744 kubelet[2580]: E0124 00:44:17.622577 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:44:17.844246 containerd[1467]: time="2026-01-24T00:44:17.839998586Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:17.866582 containerd[1467]: time="2026-01-24T00:44:17.860385357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:44:17.889538 containerd[1467]: time="2026-01-24T00:44:17.860854472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:44:17.906005 kubelet[2580]: E0124 00:44:17.905428 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:44:17.911983 kubelet[2580]: E0124 00:44:17.909578 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:44:17.911983 kubelet[2580]: E0124 00:44:17.910990 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6f87900357124d41bee7c9d9fda81593,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:17.923824 containerd[1467]: time="2026-01-24T00:44:17.922516116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:44:18.050821 containerd[1467]: time="2026-01-24T00:44:18.049949514Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:18.084310 containerd[1467]: time="2026-01-24T00:44:18.069479941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:44:18.084492 containerd[1467]: time="2026-01-24T00:44:18.084265617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:44:18.085314 kubelet[2580]: E0124 00:44:18.085124 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:44:18.085314 kubelet[2580]: E0124 00:44:18.085225 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:44:18.085755 kubelet[2580]: E0124 00:44:18.085389 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:18.087747 kubelet[2580]: E0124 00:44:18.087361 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:44:19.384100 containerd[1467]: time="2026-01-24T00:44:19.384061324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:44:19.458461 containerd[1467]: time="2026-01-24T00:44:19.457542451Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:19.461437 containerd[1467]: time="2026-01-24T00:44:19.461002917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:44:19.461437 containerd[1467]: time="2026-01-24T00:44:19.461128151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:44:19.461567 kubelet[2580]: E0124 00:44:19.461486 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:44:19.461567 kubelet[2580]: E0124 00:44:19.461545 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:44:19.462316 kubelet[2580]: E0124 00:44:19.461678 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm8nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664696c7bc-cdlnv_calico-system(d1ac7bc7-7591-48d6-8111-89103d85ee5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:19.463497 kubelet[2580]: E0124 00:44:19.463014 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:44:20.571837 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:44152.service - OpenSSH per-connection server daemon (10.0.0.1:44152). Jan 24 00:44:20.682655 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 44152 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:20.686749 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:20.700629 systemd-logind[1447]: New session 13 of user core. Jan 24 00:44:20.706477 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:44:20.991733 sshd[5599]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:21.003451 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:44152.service: Deactivated successfully. Jan 24 00:44:21.007451 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:44:21.009977 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:44:21.013569 systemd-logind[1447]: Removed session 13. 
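Each failed pull is preceded by containerd's "trying next host - response was http.StatusNotFound" message: the registry at ghcr.io answered, but it has no manifest for the v3.30.4 tag under ghcr.io/flatcar/calico/*, so the reference cannot be resolved. The sketch below reproduces that resolution step outside the kubelet; the containerd v1 Go client, the default socket path, and the "k8s.io" CRI namespace are assumptions rather than details confirmed by this log.

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"

	// Pull resolves the reference against the registry before fetching layers;
	// a missing tag surfaces here as the same "not found" error seen above.
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		fmt.Fprintf(os.Stderr, "pull %s: %v\n", ref, err)
		os.Exit(1)
	}
	fmt.Println("pulled", ref)
}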
Jan 24 00:44:21.384594 containerd[1467]: time="2026-01-24T00:44:21.384180672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:44:21.476788 containerd[1467]: time="2026-01-24T00:44:21.476670200Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:21.480612 containerd[1467]: time="2026-01-24T00:44:21.480572148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:44:21.481773 containerd[1467]: time="2026-01-24T00:44:21.480756954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:44:21.482075 kubelet[2580]: E0124 00:44:21.481167 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:44:21.482075 kubelet[2580]: E0124 00:44:21.481320 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:44:21.482075 kubelet[2580]: E0124 00:44:21.481508 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78mwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p7p9p_calico-system(aa23a976-feaf-4984-bbe7-f5e048e9da19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:21.484041 kubelet[2580]: E0124 00:44:21.483455 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:44:22.383383 kubelet[2580]: E0124 00:44:22.382760 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:24.390295 containerd[1467]: time="2026-01-24T00:44:24.388017283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:44:24.466187 containerd[1467]: time="2026-01-24T00:44:24.466039850Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:24.484633 containerd[1467]: time="2026-01-24T00:44:24.484042742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:44:24.486808 containerd[1467]: time="2026-01-24T00:44:24.485009993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:44:24.487022 kubelet[2580]: E0124 00:44:24.486477 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:44:24.487022 kubelet[2580]: E0124 00:44:24.486627 2580 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:44:24.491398 kubelet[2580]: E0124 00:44:24.491172 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:24.496129 containerd[1467]: time="2026-01-24T00:44:24.495825314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:44:24.594630 containerd[1467]: time="2026-01-24T00:44:24.594536528Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:44:24.607989 containerd[1467]: time="2026-01-24T00:44:24.605510134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:44:24.607989 containerd[1467]: time="2026-01-24T00:44:24.606284177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
active requests=0, bytes read=93" Jan 24 00:44:24.608868 kubelet[2580]: E0124 00:44:24.608374 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:44:24.608868 kubelet[2580]: E0124 00:44:24.608444 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:44:24.608868 kubelet[2580]: E0124 00:44:24.608702 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:44:24.610559 kubelet[2580]: E0124 00:44:24.610433 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:44:26.011459 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:45426.service - OpenSSH per-connection server daemon (10.0.0.1:45426). Jan 24 00:44:26.150704 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 45426 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:26.155185 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:26.169717 systemd-logind[1447]: New session 14 of user core. Jan 24 00:44:26.178296 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:44:26.387715 kubelet[2580]: E0124 00:44:26.387668 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:44:26.585752 sshd[5619]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:26.594186 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:45426.service: Deactivated successfully. Jan 24 00:44:26.597529 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:44:26.600571 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:44:26.603832 systemd-logind[1447]: Removed session 14. 
Jan 24 00:44:29.384660 kubelet[2580]: E0124 00:44:29.383621 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:44:29.386423 kubelet[2580]: E0124 00:44:29.386379 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:44:31.383677 kubelet[2580]: E0124 00:44:31.383218 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:44:31.697159 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:45428.service - OpenSSH per-connection server daemon (10.0.0.1:45428). Jan 24 00:44:31.775486 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 45428 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:31.779233 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:31.801094 systemd-logind[1447]: New session 15 of user core. Jan 24 00:44:31.811032 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:44:32.279607 sshd[5635]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:32.296030 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:45428.service: Deactivated successfully. Jan 24 00:44:32.300202 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:44:32.305740 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:44:32.310362 systemd-logind[1447]: Removed session 15. 
Jan 24 00:44:32.386186 kubelet[2580]: E0124 00:44:32.385880 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:34.398025 kubelet[2580]: E0124 00:44:34.395577 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:44:37.437565 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:36150.service - OpenSSH per-connection server daemon (10.0.0.1:36150). Jan 24 00:44:37.462623 kubelet[2580]: E0124 00:44:37.462478 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:44:37.567316 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 36150 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:37.572485 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:37.584794 systemd-logind[1447]: New session 16 of user core. Jan 24 00:44:37.599196 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:44:38.015425 sshd[5674]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:38.025601 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:36150.service: Deactivated successfully. Jan 24 00:44:38.032379 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:44:38.040464 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:44:38.043842 systemd-logind[1447]: Removed session 16. 
Jan 24 00:44:39.087221 kubelet[2580]: E0124 00:44:39.083712 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:44:43.078503 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:36160.service - OpenSSH per-connection server daemon (10.0.0.1:36160). Jan 24 00:44:43.130801 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 36160 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:43.133664 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:43.161589 systemd-logind[1447]: New session 17 of user core. Jan 24 00:44:43.172850 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:44:43.388606 kubelet[2580]: E0124 00:44:43.386255 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:44:43.530577 sshd[5690]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:43.567032 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:36160.service: Deactivated successfully. Jan 24 00:44:43.572123 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:44:43.582676 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:44:43.598620 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:36166.service - OpenSSH per-connection server daemon (10.0.0.1:36166). Jan 24 00:44:43.601106 systemd-logind[1447]: Removed session 17. Jan 24 00:44:43.660470 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 36166 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:43.668811 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:43.689601 systemd-logind[1447]: New session 18 of user core. Jan 24 00:44:43.694024 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:44:44.151101 sshd[5705]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:44.172566 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:36166.service: Deactivated successfully. Jan 24 00:44:44.180839 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:44:44.186516 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:44:44.213960 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176). Jan 24 00:44:44.242038 systemd-logind[1447]: Removed session 18. 
Jan 24 00:44:44.287003 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:44.294745 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:44.309589 systemd-logind[1447]: New session 19 of user core. Jan 24 00:44:44.317626 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:44:44.414482 kubelet[2580]: E0124 00:44:44.413595 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:44:44.427172 kubelet[2580]: E0124 00:44:44.416264 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:44:44.614984 sshd[5717]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:44.622512 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:36176.service: Deactivated successfully. Jan 24 00:44:44.626233 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:44:44.629595 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:44:44.632382 systemd-logind[1447]: Removed session 19. Jan 24 00:44:48.421248 kubelet[2580]: E0124 00:44:48.421155 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:44:49.684790 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). 
Jan 24 00:44:49.782057 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:49.782294 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:49.799587 systemd-logind[1447]: New session 20 of user core. Jan 24 00:44:49.808244 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:44:50.074814 sshd[5733]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:50.085389 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:44:50.086826 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:45342.service: Deactivated successfully. Jan 24 00:44:50.096226 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:44:50.102031 systemd-logind[1447]: Removed session 20. Jan 24 00:44:50.391853 kubelet[2580]: E0124 00:44:50.389491 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:44:51.391077 kubelet[2580]: E0124 00:44:51.390884 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:44:55.120532 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:50910.service - OpenSSH per-connection server daemon (10.0.0.1:50910). Jan 24 00:44:55.250599 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 50910 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:44:55.252478 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:55.273784 systemd-logind[1447]: New session 21 of user core. Jan 24 00:44:55.298492 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:44:55.707239 sshd[5749]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:55.724454 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:50910.service: Deactivated successfully. Jan 24 00:44:55.727877 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:44:55.730030 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. 
Jan 24 00:44:55.732686 systemd-logind[1447]: Removed session 21. Jan 24 00:44:56.384050 kubelet[2580]: E0124 00:44:56.383521 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:56.391239 kubelet[2580]: E0124 00:44:56.391081 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:44:56.398311 kubelet[2580]: E0124 00:44:56.397299 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:44:57.385626 kubelet[2580]: E0124 00:44:57.384622 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:44:57.385626 kubelet[2580]: E0124 00:44:57.385565 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:45:00.780981 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922). Jan 24 00:45:00.880031 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:00.883833 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:00.928376 systemd-logind[1447]: New session 22 of user core. Jan 24 00:45:00.938169 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 00:45:01.396460 kubelet[2580]: E0124 00:45:01.394005 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:45:01.397762 kubelet[2580]: E0124 00:45:01.397687 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:45:01.505298 sshd[5779]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:01.523853 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:50922.service: Deactivated successfully. Jan 24 00:45:01.551463 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:45:01.556259 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:45:01.559779 systemd-logind[1447]: Removed session 22. Jan 24 00:45:06.400500 kubelet[2580]: E0124 00:45:06.398703 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:06.401098 containerd[1467]: time="2026-01-24T00:45:06.399756514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:06.510341 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:34576.service - OpenSSH per-connection server daemon (10.0.0.1:34576). 
Jan 24 00:45:06.512989 containerd[1467]: time="2026-01-24T00:45:06.512021332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:06.518298 containerd[1467]: time="2026-01-24T00:45:06.518131383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:06.518698 containerd[1467]: time="2026-01-24T00:45:06.518181043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:06.527800 kubelet[2580]: E0124 00:45:06.522651 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:06.527800 kubelet[2580]: E0124 00:45:06.522723 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:06.527800 kubelet[2580]: E0124 00:45:06.522966 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gjzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-65n6d_calico-apiserver(9becf02e-a8cd-4e6f-92b4-b46fa4218220): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:06.527800 kubelet[2580]: E0124 00:45:06.524684 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:45:06.611289 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 34576 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:06.605882 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:06.620735 systemd-logind[1447]: New session 23 of user core. Jan 24 00:45:06.632151 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:45:06.917883 sshd[5816]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:06.922149 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:34576.service: Deactivated successfully. Jan 24 00:45:06.925290 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:45:06.928754 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:45:06.931389 systemd-logind[1447]: Removed session 23. 
Jan 24 00:45:10.383875 kubelet[2580]: E0124 00:45:10.383791 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:11.384140 containerd[1467]: time="2026-01-24T00:45:11.384087201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:11.467213 containerd[1467]: time="2026-01-24T00:45:11.466802539Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:11.469219 containerd[1467]: time="2026-01-24T00:45:11.469117086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:11.469329 containerd[1467]: time="2026-01-24T00:45:11.469247138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:11.469607 kubelet[2580]: E0124 00:45:11.469568 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:11.470297 kubelet[2580]: E0124 00:45:11.470207 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:11.470863 kubelet[2580]: E0124 00:45:11.470772 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qhtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6fc4d58c87-h7n7g_calico-apiserver(6a3c8225-48cc-431d-9350-25407dc6fc7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:11.471098 containerd[1467]: time="2026-01-24T00:45:11.470814817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:45:11.473133 kubelet[2580]: E0124 00:45:11.473101 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:45:11.550294 containerd[1467]: time="2026-01-24T00:45:11.550239614Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:11.564667 containerd[1467]: time="2026-01-24T00:45:11.563230660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:45:11.564667 containerd[1467]: time="2026-01-24T00:45:11.563390206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:45:11.564869 kubelet[2580]: E0124 00:45:11.563717 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:11.564869 kubelet[2580]: E0124 00:45:11.563788 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:11.564869 kubelet[2580]: E0124 00:45:11.564003 2580 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6f87900357124d41bee7c9d9fda81593,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:11.568306 containerd[1467]: time="2026-01-24T00:45:11.567738279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:45:11.664648 containerd[1467]: time="2026-01-24T00:45:11.663679295Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:11.668613 containerd[1467]: time="2026-01-24T00:45:11.668219817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:45:11.668613 containerd[1467]: time="2026-01-24T00:45:11.668377517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:45:11.668987 kubelet[2580]: E0124 00:45:11.668803 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:11.668987 kubelet[2580]: E0124 00:45:11.668871 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:11.669262 kubelet[2580]: E0124 00:45:11.669191 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9rv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7986b6bf7-c6tcd_calico-system(56fae327-01cc-4cd0-849f-72d480e4300e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:11.670936 kubelet[2580]: E0124 00:45:11.670639 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:45:11.979078 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:34578.service - OpenSSH per-connection server daemon (10.0.0.1:34578). 
Jan 24 00:45:12.052170 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 34578 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:12.065617 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:12.077425 systemd-logind[1447]: New session 24 of user core. Jan 24 00:45:12.104788 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 00:45:12.390831 containerd[1467]: time="2026-01-24T00:45:12.390346707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:45:12.487285 containerd[1467]: time="2026-01-24T00:45:12.487017018Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:12.490202 sshd[5837]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:12.491659 containerd[1467]: time="2026-01-24T00:45:12.490884768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:45:12.491659 containerd[1467]: time="2026-01-24T00:45:12.490961237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:45:12.492169 kubelet[2580]: E0124 00:45:12.491849 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:12.492169 kubelet[2580]: E0124 00:45:12.492024 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:12.493072 kubelet[2580]: E0124 00:45:12.492199 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm8nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-664696c7bc-cdlnv_calico-system(d1ac7bc7-7591-48d6-8111-89103d85ee5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:12.494847 kubelet[2580]: E0124 00:45:12.494180 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:45:12.501350 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:34578.service: Deactivated successfully. 
Jan 24 00:45:12.505535 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:45:12.509154 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:45:12.513415 systemd-logind[1447]: Removed session 24. Jan 24 00:45:15.815969 containerd[1467]: time="2026-01-24T00:45:15.815168895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:45:15.945245 containerd[1467]: time="2026-01-24T00:45:15.943969060Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:15.992605 containerd[1467]: time="2026-01-24T00:45:15.991788307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:45:15.992605 containerd[1467]: time="2026-01-24T00:45:15.991961487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:15.997614 kubelet[2580]: E0124 00:45:15.993866 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:15.997614 kubelet[2580]: E0124 00:45:15.994033 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:15.997614 kubelet[2580]: E0124 00:45:15.994240 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78mwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p7p9p_calico-system(aa23a976-feaf-4984-bbe7-f5e048e9da19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:16.000739 kubelet[2580]: E0124 00:45:15.999865 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:45:16.398155 containerd[1467]: 
time="2026-01-24T00:45:16.397159880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:45:16.490326 containerd[1467]: time="2026-01-24T00:45:16.490042551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:16.495154 containerd[1467]: time="2026-01-24T00:45:16.495069638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:45:16.495322 containerd[1467]: time="2026-01-24T00:45:16.495185973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:45:16.495650 kubelet[2580]: E0124 00:45:16.495423 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:16.495730 kubelet[2580]: E0124 00:45:16.495613 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:16.496631 kubelet[2580]: E0124 00:45:16.495861 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:16.502847 containerd[1467]: time="2026-01-24T00:45:16.501589119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:45:16.586076 containerd[1467]: time="2026-01-24T00:45:16.585980606Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:16.589503 containerd[1467]: time="2026-01-24T00:45:16.589393409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:45:16.589628 containerd[1467]: time="2026-01-24T00:45:16.589563373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:45:16.590767 kubelet[2580]: E0124 00:45:16.590551 2580 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:16.591225 kubelet[2580]: E0124 00:45:16.591144 2580 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:16.594498 kubelet[2580]: E0124 00:45:16.594358 2580 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs859,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xzhgv_calico-system(3935770d-1f88-434a-a13a-250f66f25ebf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:16.596528 kubelet[2580]: E0124 00:45:16.596349 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:45:17.548208 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:50160.service - OpenSSH per-connection server daemon (10.0.0.1:50160). Jan 24 00:45:17.704720 sshd[5869]: Accepted publickey for core from 10.0.0.1 port 50160 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:17.719719 sshd[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:17.761552 systemd-logind[1447]: New session 25 of user core. 
Jan 24 00:45:17.783187 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:45:18.125782 sshd[5869]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:18.138175 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:50160.service: Deactivated successfully. Jan 24 00:45:18.142263 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:45:18.144545 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:45:18.147768 systemd-logind[1447]: Removed session 25. Jan 24 00:45:19.384025 kubelet[2580]: E0124 00:45:19.381748 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:20.393497 kubelet[2580]: E0124 00:45:20.393167 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:45:21.383520 kubelet[2580]: E0124 00:45:21.381864 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:22.384639 kubelet[2580]: E0124 00:45:22.384351 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:45:23.168835 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:50176.service - OpenSSH per-connection server daemon (10.0.0.1:50176). Jan 24 00:45:23.237017 sshd[5884]: Accepted publickey for core from 10.0.0.1 port 50176 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:23.240975 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:23.253356 systemd-logind[1447]: New session 26 of user core. Jan 24 00:45:23.267505 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 00:45:23.455601 sshd[5884]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:23.463148 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:50176.service: Deactivated successfully. Jan 24 00:45:23.465827 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 00:45:23.467643 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit. Jan 24 00:45:23.470399 systemd-logind[1447]: Removed session 26. 
Jan 24 00:45:25.393397 kubelet[2580]: E0124 00:45:25.393136 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:45:27.388785 kubelet[2580]: E0124 00:45:27.387231 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:45:28.497743 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:41918.service - OpenSSH per-connection server daemon (10.0.0.1:41918). Jan 24 00:45:28.603016 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 41918 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:28.607646 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:28.624847 systemd-logind[1447]: New session 27 of user core. Jan 24 00:45:28.640597 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 00:45:28.895495 sshd[5901]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:28.908176 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:41918.service: Deactivated successfully. Jan 24 00:45:28.911986 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 00:45:28.918324 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit. Jan 24 00:45:28.927697 systemd-logind[1447]: Removed session 27. 
Jan 24 00:45:29.386997 kubelet[2580]: E0124 00:45:29.386154 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:45:29.390385 kubelet[2580]: E0124 00:45:29.389677 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:45:33.386078 kubelet[2580]: E0124 00:45:33.384502 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:45:33.940621 systemd[1]: Started sshd@27-10.0.0.28:22-10.0.0.1:41926.service - OpenSSH per-connection server daemon (10.0.0.1:41926). Jan 24 00:45:34.023979 sshd[5941]: Accepted publickey for core from 10.0.0.1 port 41926 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:34.029581 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:34.050862 systemd-logind[1447]: New session 28 of user core. Jan 24 00:45:34.060005 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 24 00:45:34.308822 sshd[5941]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:34.324808 systemd[1]: sshd@27-10.0.0.28:22-10.0.0.1:41926.service: Deactivated successfully. Jan 24 00:45:34.329571 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 00:45:34.332676 systemd-logind[1447]: Session 28 logged out. Waiting for processes to exit. Jan 24 00:45:34.341958 systemd[1]: Started sshd@28-10.0.0.28:22-10.0.0.1:41938.service - OpenSSH per-connection server daemon (10.0.0.1:41938). Jan 24 00:45:34.344716 systemd-logind[1447]: Removed session 28. 
Jan 24 00:45:34.396125 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 41938 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:34.398266 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:34.407082 systemd-logind[1447]: New session 29 of user core. Jan 24 00:45:34.417795 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 24 00:45:35.114134 sshd[5956]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:35.139497 systemd[1]: Started sshd@29-10.0.0.28:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336). Jan 24 00:45:35.140504 systemd[1]: sshd@28-10.0.0.28:22-10.0.0.1:41938.service: Deactivated successfully. Jan 24 00:45:35.144524 systemd[1]: session-29.scope: Deactivated successfully. Jan 24 00:45:35.150234 systemd-logind[1447]: Session 29 logged out. Waiting for processes to exit. Jan 24 00:45:35.158416 systemd-logind[1447]: Removed session 29. Jan 24 00:45:35.266674 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:35.271857 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:35.291786 systemd-logind[1447]: New session 30 of user core. Jan 24 00:45:35.305184 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 24 00:45:36.482197 sshd[5967]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:36.495807 systemd[1]: Started sshd@30-10.0.0.28:22-10.0.0.1:39346.service - OpenSSH per-connection server daemon (10.0.0.1:39346). Jan 24 00:45:36.497397 systemd[1]: sshd@29-10.0.0.28:22-10.0.0.1:39336.service: Deactivated successfully. Jan 24 00:45:36.502517 systemd[1]: session-30.scope: Deactivated successfully. Jan 24 00:45:36.505145 systemd-logind[1447]: Session 30 logged out. Waiting for processes to exit. Jan 24 00:45:36.510788 systemd-logind[1447]: Removed session 30. Jan 24 00:45:36.576000 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 39346 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:36.579485 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:36.592257 systemd-logind[1447]: New session 31 of user core. Jan 24 00:45:36.607246 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 24 00:45:36.992130 sshd[5991]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:37.008161 systemd[1]: sshd@30-10.0.0.28:22-10.0.0.1:39346.service: Deactivated successfully. Jan 24 00:45:37.012071 systemd[1]: session-31.scope: Deactivated successfully. Jan 24 00:45:37.017847 systemd-logind[1447]: Session 31 logged out. Waiting for processes to exit. Jan 24 00:45:37.025858 systemd[1]: Started sshd@31-10.0.0.28:22-10.0.0.1:39358.service - OpenSSH per-connection server daemon (10.0.0.1:39358). Jan 24 00:45:37.027613 systemd-logind[1447]: Removed session 31. Jan 24 00:45:37.075882 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 39358 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:37.079529 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:37.092740 systemd-logind[1447]: New session 32 of user core. Jan 24 00:45:37.111311 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 24 00:45:37.331673 sshd[6006]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:37.338583 systemd[1]: sshd@31-10.0.0.28:22-10.0.0.1:39358.service: Deactivated successfully. Jan 24 00:45:37.342789 systemd[1]: session-32.scope: Deactivated successfully. Jan 24 00:45:37.344475 systemd-logind[1447]: Session 32 logged out. Waiting for processes to exit. Jan 24 00:45:37.346816 systemd-logind[1447]: Removed session 32. Jan 24 00:45:37.385008 kubelet[2580]: E0124 00:45:37.384806 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:37.386209 kubelet[2580]: E0124 00:45:37.386137 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:45:37.387994 kubelet[2580]: E0124 00:45:37.387783 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:45:39.506214 kubelet[2580]: E0124 00:45:39.505678 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:45:40.384525 kubelet[2580]: E0124 00:45:40.382282 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:40.392514 kubelet[2580]: E0124 00:45:40.392265 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:45:41.389024 kubelet[2580]: E0124 00:45:41.388772 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:45:42.407028 systemd[1]: Started sshd@32-10.0.0.28:22-10.0.0.1:39368.service - OpenSSH per-connection server daemon (10.0.0.1:39368). Jan 24 00:45:42.549143 sshd[6021]: Accepted publickey for core from 10.0.0.1 port 39368 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:42.552505 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:42.570547 systemd-logind[1447]: New session 33 of user core. Jan 24 00:45:42.586653 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 24 00:45:42.850386 sshd[6021]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:42.865564 systemd[1]: sshd@32-10.0.0.28:22-10.0.0.1:39368.service: Deactivated successfully. Jan 24 00:45:42.873793 systemd[1]: session-33.scope: Deactivated successfully. Jan 24 00:45:42.881084 systemd-logind[1447]: Session 33 logged out. Waiting for processes to exit. Jan 24 00:45:42.889502 systemd-logind[1447]: Removed session 33. Jan 24 00:45:47.384741 kubelet[2580]: E0124 00:45:47.384601 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:45:47.904799 systemd[1]: Started sshd@33-10.0.0.28:22-10.0.0.1:47182.service - OpenSSH per-connection server daemon (10.0.0.1:47182). Jan 24 00:45:47.953645 sshd[6035]: Accepted publickey for core from 10.0.0.1 port 47182 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:47.956266 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:47.971005 systemd-logind[1447]: New session 34 of user core. 
Jan 24 00:45:47.979439 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 24 00:45:48.201071 sshd[6035]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:48.207624 systemd[1]: sshd@33-10.0.0.28:22-10.0.0.1:47182.service: Deactivated successfully. Jan 24 00:45:48.215511 systemd[1]: session-34.scope: Deactivated successfully. Jan 24 00:45:48.219807 systemd-logind[1447]: Session 34 logged out. Waiting for processes to exit. Jan 24 00:45:48.224673 systemd-logind[1447]: Removed session 34. Jan 24 00:45:49.386658 kubelet[2580]: E0124 00:45:49.386531 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:45:51.386231 kubelet[2580]: E0124 00:45:51.386170 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:45:52.393826 kubelet[2580]: E0124 00:45:52.393429 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:45:53.241578 systemd[1]: Started sshd@34-10.0.0.28:22-10.0.0.1:47190.service - OpenSSH per-connection server daemon (10.0.0.1:47190). Jan 24 00:45:53.314355 sshd[6051]: Accepted publickey for core from 10.0.0.1 port 47190 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:53.317328 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:53.329702 systemd-logind[1447]: New session 35 of user core. Jan 24 00:45:53.339103 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 24 00:45:53.385048 kubelet[2580]: E0124 00:45:53.384412 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:45:53.386622 kubelet[2580]: E0124 00:45:53.386577 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:45:54.004054 sshd[6051]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:54.011348 systemd-logind[1447]: Session 35 logged out. Waiting for processes to exit. Jan 24 00:45:54.011581 systemd[1]: sshd@34-10.0.0.28:22-10.0.0.1:47190.service: Deactivated successfully. Jan 24 00:45:54.014426 systemd[1]: session-35.scope: Deactivated successfully. Jan 24 00:45:54.024381 systemd-logind[1447]: Removed session 35. Jan 24 00:45:59.049439 systemd[1]: Started sshd@35-10.0.0.28:22-10.0.0.1:47388.service - OpenSSH per-connection server daemon (10.0.0.1:47388). Jan 24 00:45:59.144309 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 47388 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:45:59.147558 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:59.171172 systemd-logind[1447]: New session 36 of user core. Jan 24 00:45:59.181487 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 24 00:45:59.480160 sshd[6070]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:59.490561 systemd[1]: sshd@35-10.0.0.28:22-10.0.0.1:47388.service: Deactivated successfully. Jan 24 00:45:59.495435 systemd[1]: session-36.scope: Deactivated successfully. Jan 24 00:45:59.498874 systemd-logind[1447]: Session 36 logged out. Waiting for processes to exit. Jan 24 00:45:59.502038 systemd-logind[1447]: Removed session 36. 
Jan 24 00:46:02.393650 kubelet[2580]: E0124 00:46:02.393588 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-65n6d" podUID="9becf02e-a8cd-4e6f-92b4-b46fa4218220" Jan 24 00:46:04.404675 kubelet[2580]: E0124 00:46:04.404521 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6fc4d58c87-h7n7g" podUID="6a3c8225-48cc-431d-9350-25407dc6fc7b" Jan 24 00:46:04.406766 kubelet[2580]: E0124 00:46:04.406520 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xzhgv" podUID="3935770d-1f88-434a-a13a-250f66f25ebf" Jan 24 00:46:04.531523 systemd[1]: Started sshd@36-10.0.0.28:22-10.0.0.1:42864.service - OpenSSH per-connection server daemon (10.0.0.1:42864). Jan 24 00:46:04.606233 sshd[6107]: Accepted publickey for core from 10.0.0.1 port 42864 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:46:04.614638 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:04.635878 systemd-logind[1447]: New session 37 of user core. Jan 24 00:46:04.654713 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 24 00:46:04.927520 sshd[6107]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:04.936549 systemd[1]: sshd@36-10.0.0.28:22-10.0.0.1:42864.service: Deactivated successfully. Jan 24 00:46:04.941743 systemd[1]: session-37.scope: Deactivated successfully. Jan 24 00:46:04.945826 systemd-logind[1447]: Session 37 logged out. Waiting for processes to exit. Jan 24 00:46:04.950676 systemd-logind[1447]: Removed session 37. 
Jan 24 00:46:06.392452 kubelet[2580]: E0124 00:46:06.391225 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p7p9p" podUID="aa23a976-feaf-4984-bbe7-f5e048e9da19" Jan 24 00:46:06.395393 kubelet[2580]: E0124 00:46:06.394022 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7986b6bf7-c6tcd" podUID="56fae327-01cc-4cd0-849f-72d480e4300e" Jan 24 00:46:07.388524 kubelet[2580]: E0124 00:46:07.386057 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664696c7bc-cdlnv" podUID="d1ac7bc7-7591-48d6-8111-89103d85ee5f" Jan 24 00:46:08.383694 kubelet[2580]: E0124 00:46:08.383502 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:09.964469 systemd[1]: Started sshd@37-10.0.0.28:22-10.0.0.1:42872.service - OpenSSH per-connection server daemon (10.0.0.1:42872). Jan 24 00:46:10.050496 sshd[6121]: Accepted publickey for core from 10.0.0.1 port 42872 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:46:10.055878 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:46:10.068871 systemd-logind[1447]: New session 38 of user core. Jan 24 00:46:10.074713 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 24 00:46:10.335157 sshd[6121]: pam_unix(sshd:session): session closed for user core Jan 24 00:46:10.340338 systemd[1]: sshd@37-10.0.0.28:22-10.0.0.1:42872.service: Deactivated successfully. Jan 24 00:46:10.347849 systemd[1]: session-38.scope: Deactivated successfully. Jan 24 00:46:10.351691 systemd-logind[1447]: Session 38 logged out. Waiting for processes to exit. 
Jan 24 00:46:10.358704 systemd-logind[1447]: Removed session 38.